| { |
| "paper_id": "D18-1012", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:49:25.901108Z" |
| }, |
| "title": "Game-Based Video-Context Dialogue", |
| "authors": [ |
| { |
| "first": "Ramakanth", |
| "middle": [], |
| "last": "Pasunuru", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UNC Chapel Hill", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UNC Chapel Hill", |
| "location": {} |
| }, |
| "email": "mbansal@cs.unc.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Current dialogue systems focus more on textual and speech context knowledge and are usually based on two speakers. Some recent work has investigated static image-based dialogue. However, several real-world human interactions also involve dynamic visual context (similar to videos) as well as dialogue exchanges among multiple speakers. To move closer towards such multimodal conversational skills and visually-situated applications, we introduce a new video-context, many-speaker dialogue dataset based on livebroadcast soccer game videos and chats from Twitch.tv. This challenging testbed allows us to develop visually-grounded dialogue models that should generate relevant temporal and spatial event language from the live video, while also being relevant to the chat history. For strong baselines, we also present several discriminative and generative models, e.g., based on tridirectional attention flow (TriDAF). We evaluate these models via retrieval ranking-recall, automatic phrasematching metrics, as well as human evaluation studies. We also present dataset analyses, model ablations, and visualizations to understand the contribution of different modalities and model components.", |
| "pdf_parse": { |
| "paper_id": "D18-1012", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Current dialogue systems focus more on textual and speech context knowledge and are usually based on two speakers. Some recent work has investigated static image-based dialogue. However, several real-world human interactions also involve dynamic visual context (similar to videos) as well as dialogue exchanges among multiple speakers. To move closer towards such multimodal conversational skills and visually-situated applications, we introduce a new video-context, many-speaker dialogue dataset based on livebroadcast soccer game videos and chats from Twitch.tv. This challenging testbed allows us to develop visually-grounded dialogue models that should generate relevant temporal and spatial event language from the live video, while also being relevant to the chat history. For strong baselines, we also present several discriminative and generative models, e.g., based on tridirectional attention flow (TriDAF). We evaluate these models via retrieval ranking-recall, automatic phrasematching metrics, as well as human evaluation studies. We also present dataset analyses, model ablations, and visualizations to understand the contribution of different modalities and model components.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Dialogue systems or conversational agents which are able to hold natural, relevant, and coherent interactions with humans have been a long-standing goal of artificial intelligence and machine learning. There has been a lot of important previous work in this field for decades (Weizenbaum, 1966; Isbell et al., 2000; Rambow et al., 2001; Rieser et al., 2005; Georgila et al., 2006; Rieser and Lemon, 2008; Ritter et al., 2011) , includ-", |
| "cite_spans": [ |
| { |
| "start": 276, |
| "end": 294, |
| "text": "(Weizenbaum, 1966;", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 295, |
| "end": 315, |
| "text": "Isbell et al., 2000;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 316, |
| "end": 336, |
| "text": "Rambow et al., 2001;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 337, |
| "end": 357, |
| "text": "Rieser et al., 2005;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 358, |
| "end": 380, |
| "text": "Georgila et al., 2006;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 381, |
| "end": 404, |
| "text": "Rieser and Lemon, 2008;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 405, |
| "end": 425, |
| "text": "Ritter et al., 2011)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We release all data, code, and models at: https:// github.com/ramakanth-pasunuru/video-dialogue S1: what an offside trap OMEGALUL S2: Lol that finish bro S3: suprised you didn't do the extra pass S4: @S10 a drunk bet? S5: @S11 thanks mate S6: could have passed one more S7: Pass that S1: record now! S8: !record S9: done a nother pass there Figure 1 : Sample example from our many-speaker, video-context dialogue dataset, based on live soccer game chat. The task is to predict the response (bottomright) using the video context (left) and the chat context (top-right).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 341, |
| "end": 349, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "ing recent work on introduction of large textualdialogue datasets (e.g., Lowe et al. (2015) ; Serban et al. (2016) ) and end-to-end neural network based models (Sordoni et al., 2015; Vinyals and Le, 2015; Su et al., 2016; Luan et al., 2016; Li et al., 2016; Serban et al., 2017a,b) .", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 91, |
| "text": "Lowe et al. (2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 94, |
| "end": 114, |
| "text": "Serban et al. (2016)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 160, |
| "end": 182, |
| "text": "(Sordoni et al., 2015;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 183, |
| "end": 204, |
| "text": "Vinyals and Le, 2015;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 205, |
| "end": 221, |
| "text": "Su et al., 2016;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 222, |
| "end": 240, |
| "text": "Luan et al., 2016;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 241, |
| "end": 257, |
| "text": "Li et al., 2016;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 258, |
| "end": 281, |
| "text": "Serban et al., 2017a,b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Current dialogue tasks are usually focused on the textual or verbal context (conversation history). In terms of multimodal dialogue, speechbased spoken dialogue systems have been widely explored (Eckert et al., 1997; Young, 2000; Janin et al., 2003; Celikyilmaz et al., 2017; Wen et al., 2015; Su et al., 2016; Mrk\u0161i\u0107 et al., 2016) , as well as work on gesture and haptics based dialogue (Johnston et al., 2002; Cassell, 1999; Foster et al., 2008) . In order to address the additional advantage of using visually-grounded context knowledge in dialogue, recent work introduced the visual dialogue task (Das et al., 2017; de Vries et al., 2017; Mostafazadeh et al., 2017) . However, the visual context in these tasks is lim-ited to one static image. Moreover, the interactions are between two speakers with fixed roles (one asks questions and the other answers).", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 216, |
| "text": "(Eckert et al., 1997;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 217, |
| "end": 229, |
| "text": "Young, 2000;", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 230, |
| "end": 249, |
| "text": "Janin et al., 2003;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 250, |
| "end": 275, |
| "text": "Celikyilmaz et al., 2017;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 276, |
| "end": 293, |
| "text": "Wen et al., 2015;", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 294, |
| "end": 310, |
| "text": "Su et al., 2016;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 311, |
| "end": 331, |
| "text": "Mrk\u0161i\u0107 et al., 2016)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 388, |
| "end": 411, |
| "text": "(Johnston et al., 2002;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 412, |
| "end": 426, |
| "text": "Cassell, 1999;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 427, |
| "end": 447, |
| "text": "Foster et al., 2008)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 601, |
| "end": 619, |
| "text": "(Das et al., 2017;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 620, |
| "end": 642, |
| "text": "de Vries et al., 2017;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 643, |
| "end": 669, |
| "text": "Mostafazadeh et al., 2017)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Several situations of real-world dialogue among humans involve more 'dynamic' visual context, i.e., video-style information of the world moving around us (both spatially and temporally). Further, several human conversations involve more than two speakers, with changing roles. In order to develop such dynamically-visual multimodal dialogue models, we introduce a new 'manyspeaker, video-context chat' testbed, along with a new dataset and models for the same. Our dataset is based on live-broadcast soccer (FIFA-18) game videos from the 'Twitch.tv' live video streaming platform, along with the spontaneous, many-speaker live chats about the game. This challenging testbed allows us to develop dialogue models where the generated response is required to be relevant to the temporal and spatial events in the live video, as well as be relevant to the chat history (with potential impact towards videogrounded applications such as personal assistants, intelligent tutors, and human-robot collaboration).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We also present several strong discriminative and generative baselines that learn to retrieve and generate bimodal-relevant responses. We first present a triple-encoder discriminative model to encode the video, chat history, and response, and then classify the relevance label of the response. We then improve over this model via tridirectional attention flow (TriDAF). For the generative models, we model bidirectional attention flow between the video and textual chat context encoders, which then decodes the response. We evaluate these models via retrieval ranking-recall, phrasematching metrics, as well as human evaluation studies. We also present dataset analysis as well as model ablations and attention visualizations to understand the contribution of the video vs. chat modalities and the model components.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Early dialogue systems had components of natural language (NL) understanding unit, dialogue manager, and NL generation unit (Bates, 1995) . Statistical learning methods were used for automatic feature extraction (Dowding et al., 1993; Mikolov et al., 2013) , dialogue managers incorporated reward-driven reinforcement learning (Young et al., 2013; Shah et al., 2016) , and the generation units have been extended with seq2seq neural network models (Vinyals and Le, 2015; Serban et al., 2016; Luan et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 137, |
| "text": "(Bates, 1995)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 212, |
| "end": 234, |
| "text": "(Dowding et al., 1993;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 235, |
| "end": 256, |
| "text": "Mikolov et al., 2013)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 327, |
| "end": 347, |
| "text": "(Young et al., 2013;", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 348, |
| "end": 366, |
| "text": "Shah et al., 2016)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 448, |
| "end": 470, |
| "text": "(Vinyals and Le, 2015;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 471, |
| "end": 491, |
| "text": "Serban et al., 2016;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 492, |
| "end": 510, |
| "text": "Luan et al., 2016)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In addition to the focus on textual dialogue context, using multimodal context brings more potential for having real-world grounded conversations. For example, spoken dialogue systems have been widely explored Gurevych and Strube, 2004; Georgila et al., 2006; Eckert et al., 1997; Young, 2000; Janin et al., 2003; De Mori, 2007; Wen et al., 2015; Su et al., 2016; Mrk\u0161i\u0107 et al., 2016; Hori et al., 2016; Celikyilmaz et al., 2015 Celikyilmaz et al., , 2017 , as well as gesture and haptics based dialogue (Johnston et al., 2002; Cassell, 1999; Foster et al., 2008) . Additionally, dialogue systems for digital personal assistants are also well explored (Myers et al., 2007; Sarikaya et al., 2016; Damacharla et al., 2018) . In the visual modality direction, some important recent attempts have been made to use static image based context in dialogue systems (Das et al., 2017; de Vries et al., 2017; Mostafazadeh et al., 2017) , who proposed the 'visual dialog' task, where the human can ask questions on a static image, and an agent interacts by answering these questions based on the previous chat context and the image's visual features. Also, Celikyilmaz et al. (2014) used visual display information for on-screen item resolution in utterances for improving personal digital assistants.", |
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 236, |
| "text": "Gurevych and Strube, 2004;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 237, |
| "end": 259, |
| "text": "Georgila et al., 2006;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 260, |
| "end": 280, |
| "text": "Eckert et al., 1997;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 281, |
| "end": 293, |
| "text": "Young, 2000;", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 294, |
| "end": 313, |
| "text": "Janin et al., 2003;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 314, |
| "end": 328, |
| "text": "De Mori, 2007;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 329, |
| "end": 346, |
| "text": "Wen et al., 2015;", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 347, |
| "end": 363, |
| "text": "Su et al., 2016;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 364, |
| "end": 384, |
| "text": "Mrk\u0161i\u0107 et al., 2016;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 385, |
| "end": 403, |
| "text": "Hori et al., 2016;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 404, |
| "end": 428, |
| "text": "Celikyilmaz et al., 2015", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 429, |
| "end": 455, |
| "text": "Celikyilmaz et al., , 2017", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 504, |
| "end": 527, |
| "text": "(Johnston et al., 2002;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 528, |
| "end": 542, |
| "text": "Cassell, 1999;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 543, |
| "end": 563, |
| "text": "Foster et al., 2008)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 652, |
| "end": 672, |
| "text": "(Myers et al., 2007;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 673, |
| "end": 695, |
| "text": "Sarikaya et al., 2016;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 696, |
| "end": 720, |
| "text": "Damacharla et al., 2018)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 857, |
| "end": 875, |
| "text": "(Das et al., 2017;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 876, |
| "end": 898, |
| "text": "de Vries et al., 2017;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 899, |
| "end": 925, |
| "text": "Mostafazadeh et al., 2017)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1146, |
| "end": 1171, |
| "text": "Celikyilmaz et al. (2014)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In contrast, we propose to employ dynamic video-based information as visual context knowledge in dialogue models, so as to move towards video-grounded intelligent assistant applications. In the video+language direction, previous work has looked at video captioning (Venugopalan et al., 2015) as well as Q&A and fill-inthe-blank tasks on videos (Tapaswi et al., 2016; Jang et al., 2017; Maharaj et al., 2017) and interactive 3D environments (Das et al., 2018; Yan et al., 2018; Gordon et al., 2017; Anderson et al., 2017) . There has also been early related work on generating sportscast commentaries from simulation (RoboCup) soccer videos represented as non-visual state information (Chen and Mooney, 2008) . Also, Liu et al. (2016a) presented some initial ideas on robots learning grounded task representations by watching and interacting with humans performing the task (i.e., by converting human demonstration videos to Causal And-Or graphs). On the other hand, we propose a new video-chat dataset where the dialogue models need to generate the next response in the sequence of chats, conditioned both on the raw video features as well as the previous textual chat history. Moreover, our new dataset presents a many-speaker conversation setting, similar to previous work on meeting understanding and Computer Supported Cooperative Work (CSCW) (Janin et al., 2003; Waibel et al., 2001; Schmidt and Bannon, 1992) . In the live video stream direction, Fu et al. (2017) and Ping and Chen (2017) used real-time comments to predict the frame highlights in a video, and Barbieri et al. (2017) presented emotes and troll prediction.", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 291, |
| "text": "(Venugopalan et al., 2015)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 344, |
| "end": 366, |
| "text": "(Tapaswi et al., 2016;", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 367, |
| "end": 385, |
| "text": "Jang et al., 2017;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 386, |
| "end": 407, |
| "text": "Maharaj et al., 2017)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 440, |
| "end": 458, |
| "text": "(Das et al., 2018;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 459, |
| "end": 476, |
| "text": "Yan et al., 2018;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 477, |
| "end": 497, |
| "text": "Gordon et al., 2017;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 498, |
| "end": 520, |
| "text": "Anderson et al., 2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 684, |
| "end": 707, |
| "text": "(Chen and Mooney, 2008)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 716, |
| "end": 734, |
| "text": "Liu et al. (2016a)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1347, |
| "end": 1367, |
| "text": "(Janin et al., 2003;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1368, |
| "end": 1388, |
| "text": "Waibel et al., 2001;", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 1389, |
| "end": 1414, |
| "text": "Schmidt and Bannon, 1992)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 1453, |
| "end": 1469, |
| "text": "Fu et al. (2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1474, |
| "end": 1494, |
| "text": "Ping and Chen (2017)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 1567, |
| "end": 1589, |
| "text": "Barbieri et al. (2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 Twitch-FIFA Dataset", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For our new video-context dialogue task, we used the publicly accessible Twitch.tv live broadcast platform, and collected videos of soccer (FIFA-18) games along with the users' live chat conversations about the game. This dataset has videos involving various realistic human actions and events in a complex sports environment and hence serves as a good testbed and first step towards multimodal video-based dialogue data. An example is shown in Fig. 1 (and an original screenshot example in Fig. 2) , where the users perform a complex 'manyspeaker', 'multimodal' dialogue. Overall, we collected 49 FIFA-18 game videos along with their users' chat, and divided them into 33 videos for training, 8 videos for validation, and 8 videos for testing. Each such video is several hours long, providing a good amount of data (Table 2) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 445, |
| "end": 451, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 491, |
| "end": 498, |
| "text": "Fig. 2)", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 816, |
| "end": 825, |
| "text": "(Table 2)", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset Collection and Processing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To extract triples (instances) of video context, chat context, and response from this data, we divide these videos based on the fixed time frames instead of fixed number of utterances in order to maintain conversation topic clusters (because of the sparse nature of chat utterances count over the time). First, we use 20-sec context windows to extract the video clips and users utterances in Relevance to Video+Chat filtered response wins 34% 1st response wins 3% Non-distinguishable 63% (56 both-good, 7 both-bad) this time frame, and use it as our video and chat contexts, resp. Next, the chat utterances in the immediately-following 10-sec window (response window) that do not overlap with the next instance's context window are considered as potential responses. 1 Hence, there are only two instances (triples) in a 60-sec long video, i.e., 20-sec video+chat context window and 10-sec response window, and there is no overlap between the instances. Now, out of these potential responses, to only allow the response that has at least some good coherence and relevance with the chat context's topic, we choose the first (earliest) response that has high similarity with some other utterance in this response window (using 0.5 BLEU-4 threshold, based on manual inspection). 2 Human Quality Evaluation of Data Filtering Process: To evaluate the quality of the responses that result from our filtering process described above, we performed an anonymous (randomly shuffled w/o identity) human comparison between the response selected by our filtering process vs. the first response from the response window without any filtering, based on relevance w.r.t. video and chat context. Table 1 presents the results on 100 sample size, showing that humans in a blindtest found 90% (34+56) of our filtered responses as valid responses, verifying that our response selection procedure is reasonable. 
Furthermore, out of these 90% valid responses, we found that 55% are chat-only relevant, 11% are video-only relevant, and 24% are both video+chat relevant. In order to make the above procedure safe and to make the dataset more challenging, we also discourage frequent responses (top-20 most-frequent 1 We use non-overlapping windows because: (1) the utterances are non-uniformly distributed in time and hence if we have a shifting window, sometimes a particular data instance/chunk becomes very sparse and contains almost zero utterances; (2) we do not want overlap between response of one window with the context of the next window, so as to avoid the encoder already having seen the response (as part of context) that the decoder needs to generate for the other window.", |
| "cite_spans": [ |
| { |
| "start": 2189, |
| "end": 2190, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1678, |
| "end": 1685, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset Collection and Processing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2 Based on intuition that if multiple speakers are saying the same response in that 10-second window, then this response should be more meaningful/relevant w.r.t. chat context. generic utterances) unless no other response satisfies the similarity condition, hence suppressing the frequent responses. 3 If we couldn't find any utterance based on the multi-response matching procedure described above, then we just consider the first utterance in the 10-second window as the response. 4 We also make sure that the chat context window has at least 4 utterances, otherwise we exclude that context window and also the corresponding response window from the dataset. After all this processing, our final resulting dataset contains 10, 510 samples in training, 2, 153 samples in validation, and 2, 780 samples in test. 5", |
| "cite_spans": [ |
| { |
| "start": 483, |
| "end": 484, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Collection and Processing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Dataset Statistics Table 2 presents the full statistics on train, validation, and test sets of our Twitch-FIFA dataset, after the filtering process described in Sec. 3.1. As shown, the average chat context length in the dataset is around 68 words, and the average response length is 6.3 words. Chat Context Size Fig. 3 presents the study of number of utterances in the chat context vs. the number of such training samples. As we limit the minimum number of utterances to 4, chat context with less than 4 utterances is not present in the dataset. From the Fig. 3 , it is clear that as the number of utterances in the chat context increases, the number of such training samples decrease. Frequent Words Fig. 4 presents the top-20 frequent words (excluding stop words) and their corresponding frequency in our Twitch-FIFA dataset. Most of these frequent words are related to soccer vocabulary. Also, some of these frequent words are twitch emotes (e.g. 'kappa', 'inceptionlove'). ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 26, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 312, |
| "end": 318, |
| "text": "Fig. 3", |
| "ref_id": null |
| }, |
| { |
| "start": 555, |
| "end": 561, |
| "text": "Fig. 3", |
| "ref_id": null |
| }, |
| { |
| "start": 701, |
| "end": 707, |
| "text": "Fig. 4", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset Analysis", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let v = {v 1 , v 2 , .., v m } be the video context frames, u = {u 1 , u 2 , .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "., u n } be the textual chat (utterance) context tokens, and r = {r 1 , r 2 , .., r k } be response tokens generated (or retrieved).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our simple non-trained baselines are Most-Frequent-Response (re-rank the candidate responses based on their frequency in the training set), Chat-Response-Cosine (re-rank the candidate responses based on their similarity score w.r.t. the chat context), and Nearest-Neighbor (find the Kbest similar chat contexts in the training set, take their corresponding responses, and then re-rank the candidate responses based on mean similarity score w.r.t. this K-best response set). For trained baselines, we use logistic regression and Naive Bayes methods. We use the final state of a Twitch-trained RNN Language Model to represent the chat context and response. Please see supplementary for full details.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For our simpler discriminative model, we use a 'triple encoder' to encode the video context, chat context, and response (see Fig. 5 ), as an extension of the dual encoder model in Lowe et al. (2015) . The task here is to predict the given train- ing triple (v, u, r) as positive or negative. Let", |
| "cite_spans": [ |
| { |
| "start": 180, |
| "end": 198, |
| "text": "Lowe et al. (2015)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 131, |
| "text": "Fig. 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "h v f , h u", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "f , and h r f be the final state information of the video, chat, and response LSTM-RNN (bidirectional) encoders respectively; then the probability of a positive training triple is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "p(v, u, r; \u03b8) = \u03c3([h v f ; h u f ] T W h r f + b) (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "where W and b are trainable parameters. Here, W can be viewed as a similarity matrix which will bring the context [h v f ; h u f ] into the same space as the response h r f , and get a suitable similarity score. For optimizing our discriminative model, we use max-margin loss function similar to Mao et al. (2016) and Yu et al. (2017) . Given a positive training triple (v, u, r) , let the corresponding negative training triples be (v , u, r), (v, u , r) , and (v, u, r ), i.e., one modality is wrong at a time in each of these three (see Sec. 5 for the negative example selection). The max-margin loss is:", |
| "cite_spans": [ |
| { |
| "start": 296, |
| "end": 313, |
| "text": "Mao et al. (2016)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 318, |
| "end": 334, |
| "text": "Yu et al. (2017)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 370, |
| "end": 379, |
| "text": "(v, u, r)", |
| "ref_id": null |
| }, |
| { |
| "start": 445, |
| "end": 455, |
| "text": "(v, u , r)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L(\u03b8) = [max(0, M + log p(v , u, r) \u2212 log p(v, u, r)) + max(0, M + log p(v, u , r) \u2212 log p(v, u, r)) + max(0, M + log p(v, u, r ) \u2212 log p(v, u, r))]", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "where the summation is over all the training triples in the dataset. M is a tunable margin hyperparameter between positive and negative training triples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Triple Encoder", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Our tridirectional attention flow model learns stronger joint spaces between the three modalities in a mutual-information way. We use bidirectional attention flow mechanisms (Seo et al., 2017) between the video and chat contexts, between the video context and the response, as well as between the chat context and the response, hence enabling attention flow across all three modalities, as shown in Fig. 6 . We name this model Tridirectional Attention Flow or TriDAF. We will next discuss the bidirectional attention flow mechanism between video and chat contexts, but the same formulation holds true for bidirectional attention between video context and response, and between chat context and response. Given the video context hidden state h v i and chat context hidden state h u j at time steps i and j respectively, the bidirectional attention mechanism is based on the similarity score:", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 192, |
| "text": "(Seo et al., 2017)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 399, |
| "end": 405, |
| "text": "Fig. 6", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "S^{(v,u)}_{i,j} = w^{T}_{S^{(v,u)}} [h^v_i ; h^u_j ; h^v_i \u2299 h^u_j]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "where S^{(v,u)}_{i,j} is a scalar, w_{S^{(v,u)}} is a trainable parameter, and \u2299 denotes element-wise multiplication. The attention distribution from chat context to video context is defined as \u03b1_{i:} = softmax(S_{i:}), hence the chat-to-video context vector c^{v\u2190u}_i = \u03a3_j \u03b1_{i,j} h^u_j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Similarly, the attention distribution from video context to chat context is defined as \u03b2_{j:} = softmax(S_{:j}), hence the video-to-chat context vector c^{u\u2190v}_j = \u03a3_i \u03b2_{j,i} h^v_i. We then compute similar bidirectional attention flow mechanisms between the video context and the response, and between the chat context and the response. Then, we concatenate each hidden state with its corresponding context vectors from the other two modalities, e.g.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
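The bidirectional attention flow between the video and chat encoders can be sketched as below. This is a hedged NumPy sketch, not the paper's code: it assumes the similarity of Eqn. 3 with a trainable vector `w` over the concatenation [h^v_i; h^u_j; h^v_i ⊙ h^u_j], and then computes both attention directions and their context vectors; all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_contexts(H_v, H_u, w):
    """H_v: (Tv, d) video states, H_u: (Tu, d) chat states, w: (3d,) trainable.
    Returns chat-to-video contexts c_v (Tv, d) and video-to-chat contexts c_u (Tu, d)."""
    Tv, Tu = H_v.shape[0], H_u.shape[0]
    # Eqn. 3: S_{i,j} = w^T [h^v_i ; h^u_j ; h^v_i * h^u_j]
    S = np.array([[w @ np.concatenate([H_v[i], H_u[j], H_v[i] * H_u[j]])
                   for j in range(Tu)] for i in range(Tv)])
    alpha = softmax(S, axis=1)     # alpha_{i:} = softmax(S_{i:}) over chat steps
    beta = softmax(S, axis=0).T    # beta_{j:}  = softmax(S_{:j}) over video steps
    c_v = alpha @ H_u              # c^{v<-u}_i = sum_j alpha_{i,j} h^u_j
    c_u = beta @ H_v               # c^{u<-v}_j = sum_i beta_{j,i} h^v_i
    return c_v, c_u
```

Each context vector is a convex combination of the attended encoder's states, so attention simply pools the other modality's hidden states per time step.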
| { |
| "text": "\u0125^v_i = [h^v_i ; c^{v\u2190u}_i ; c^{v\u2190r}_i]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "for the i-th timestep of the video context. Finally, we add a self-attention mechanism (Lin et al., 2017) across the concatenated hidden states of each of the three modules. 6 If \u0125^v_i is the final concatenated vector of the video context at time step i, then the self-attention weights \u03b1^s for this video context are the softmax of e^s:", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 103, |
| "text": "(Lin et al., 2017)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e^s_i = V^v_a tanh(W^v_a \u0125^v_i + b^v_a)", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "where V^v_a, W^v_a, and b^v_a are trainable parameters, and the final representation vector of the video context is \u0109^v = \u03a3_i \u03b1^s_i \u0125^v_i. Similarly, the final representation vectors of the chat context and the response are \u0109^u and \u0109^r, respectively. Finally, the probability that the given training triple (v, u, r) is positive is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p(v, u, r; \u03b8) = \u03c3([\u0109^v ; \u0109^u]^T W \u0109^r + b)", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Here, again, we use the max-margin loss (Eqn. 2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tridirectional Attention Flow (TriDAF)", |
| "sec_num": "4.2.2" |
| }, |
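The self-attention pooling of Eqn. 4 and the triple-scoring of Eqn. 5 can be sketched together. This is an illustrative NumPy sketch under assumed shapes, not the released model: `self_attentive_pool` collapses a sequence of concatenated states into one vector, and `tridaf_probability` applies the bilinear sigmoid score; all parameter names are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def self_attentive_pool(H_hat, V_a, W_a, b_a):
    """Eqn. 4: e^s_i = V_a tanh(W_a h_i + b_a); returns c = sum_i alpha^s_i h_i."""
    e = np.array([V_a @ np.tanh(W_a @ h + b_a) for h in H_hat])
    return softmax(e) @ H_hat  # attention-weighted pooled representation

def tridaf_probability(c_v, c_u, c_r, W, b):
    """Eqn. 5: p(v, u, r) = sigma([c_v; c_u]^T W c_r + b)."""
    score = np.concatenate([c_v, c_u]) @ W @ c_r + b
    return 1.0 / (1.0 + np.exp(-score))  # probability the triple is positive
```

The pooled vectors ĉ^v, ĉ^u, ĉ^r play the role of the final representations in the text, and the sigmoid output feeds directly into the max-margin loss of Eqn. 2.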
| { |
| "text": "Our simpler generative model is a sequence-to-sequence model with a bilinear attention mechanism (similar to Luong et al. (2015)). We have two encoders, one for encoding the video context and another for encoding the chat context, as shown in Fig. 7. We combine the final state information from both encoders and provide it as the initial state to the response-generation decoder. The two encoders and the decoder are all two-layer LSTM-RNNs. Let h^v_i and h^u_j be the hidden states of the video and chat encoders at time steps i and j, respectively. At each time step t of the decoder with hidden state h^r_t, the decoder attends to parts of the video and chat encoders and uses the combined information to generate the next token. Let \u03b1_t and \u03b2_t be the attention weight distributions for the video and chat encoders, respectively, with video context vector", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 125, |
| "text": "Luong et al. (2015)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 241, |
| "end": 247, |
| "text": "Fig. 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "c^v_t = \u03a3_i \u03b1_{t,i} h^v_i and chat context vector c^u_t = \u03a3_j \u03b2_{t,j} h^u_j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "The attention distribution for the video encoder is defined as follows (the same holds for the chat encoder):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e_{t,i} = (h^r_t)^T W^v_a h^v_i ; \u03b1_t = softmax(e_t)", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "where W^v_a is a trainable parameter. Next, we concatenate the attention-based context information (c^v_t and c^u_t) and the decoder hidden state (h^r_t), and apply a non-linear transformation to get the final hidden state \u0125^r_t as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u0125^r_t = tanh(W_c [c^v_t ; c^u_t ; h^r_t])", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "where W_c is again a trainable parameter. Finally, we project the final hidden state to the vocabulary size and feed it to a softmax layer to get the vocabulary distribution p(r_t | r_{1:t-1}, v, u; \u03b8). During training, we minimize the cross-entropy loss defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "L_{XE}(\u03b8) = \u2212\u03a3_t log p(r_t | r_{1:t-1}, v, u; \u03b8) (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
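One decoder time step of this attention model (Eqns. 6-8) can be sketched as below. This is a NumPy sketch under assumed dimensions, not the paper's implementation: it applies bilinear attention over both encoders, fuses the contexts with the decoder state via tanh, and projects to a vocabulary distribution; all parameter names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decoder_step(h_r, H_v, H_u, W_va, W_ua, W_c, W_out):
    """One decoder step: bilinear attention over both encoders (Eqn. 6),
    fused hidden state (Eqn. 7), then projection to the vocabulary distribution."""
    def attend(H, W):
        e = H @ W.T @ h_r        # e_{t,i} = h_r^T W h_i for every encoder step i
        return softmax(e) @ H    # attention-weighted context vector
    c_v, c_u = attend(H_v, W_va), attend(H_u, W_ua)
    h_hat = np.tanh(W_c @ np.concatenate([c_v, c_u, h_r]))  # Eqn. 7
    return softmax(W_out @ h_hat)  # p(r_t | r_{1:t-1}, v, u)
```

Given a target token index, the per-step cross-entropy of Eqn. 8 is then just `-np.log(p[target])`, summed over the decoding steps.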
| { |
| "text": "where the final summation is over all the training triples in the dataset. [Figure 7: Overview of our generative model with bidirectional attention flow between video context and chat context during response generation.] Further, to train a stronger generative model with negative training examples (which teaches", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 76, |
| "end": 84, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "the model to give higher generative decoder probability to the positive response as compared to all the negative ones), we use a max-margin loss (similar to Eqn. 2 in Sec. 4.2.1):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L_{MM}(\u03b8) = [max(0, M + log p(r|v', u) \u2212 log p(r|v, u)) + max(0, M + log p(r|v, u') \u2212 log p(r|v, u)) + max(0, M + log p(r'|v, u) \u2212 log p(r|v, u))]", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "where the summation is over all the training triples in the dataset. Overall, the final joint loss function is a weighted combination of cross-entropy loss and max-margin loss:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "L(\u03b8) = L_{XE}(\u03b8) + \u03bb L_{MM}(\u03b8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": ", where \u03bb is a tunable hyperparameter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seq2seq with Attention", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "The stronger version of our generative model extends the two-encoder-attention-decoder model above by adding a bidirectional attention flow (BiDAF) mechanism (Seo et al., 2017) between the video and chat encoders, as shown in Fig. 7. Given the hidden states h^v_i and h^u_j of the video and chat encoders at time steps i and j, the final hidden states after BiDAF are \u0125", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 171, |
| "text": "(Seo et al., 2017)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 223, |
| "text": "Fig. 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bidirectional Attention Flow (BiDAF)", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "^v_i = [h^v_i ; c^{v\u2190u}_i] and \u0125^u_j = [h^u_j ; c^{u\u2190v}_j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Attention Flow (BiDAF)", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "] (as described in Sec. 4.2.2), respectively. Now, the decoder attends over these final hidden states, and the rest of the decoding process is the same as in Sec. 4.3.1 above, including the weighted joint cross-entropy and max-margin loss.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Attention Flow (BiDAF)", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "Evaluation We first evaluate both our discriminative and generative models using retrieval-based recall@k scores, which is a concrete metric for such dialogue generation tasks (Lowe et al., 2015) . For our discriminative models, we simply rerank the given responses (in a candidate list of size 10, based on 9 negative examples; more details below) in the order of the probability score each response gets from the model. If the positive response is within the top-k list, then the recall@k score is 1, otherwise 0, following previous Ubuntu-dialogue work (Lowe et al., 2015) . For the generative models, we follow a similar approach, but the reranking score for a candidate response is based on the log probability score given by the generative models' decoder for that response, following the setup of previous visual-dialog work (Das et al., 2017) . In our experiments, we use recall@1, recall@2, and recall@5 scores. For completeness, we also report the phrase-matching metric scores: METEOR (Denkowski and Lavie, 2014) and ROUGE (Lin, 2004) for our generative models. We also present human evaluation.", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 195, |
| "text": "(Lowe et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 556, |
| "end": 575, |
| "text": "(Lowe et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 832, |
| "end": 850, |
| "text": "(Das et al., 2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 996, |
| "end": 1023, |
| "text": "(Denkowski and Lavie, 2014)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1034, |
| "end": 1045, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
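The retrieval-based recall@k metric described above can be sketched as a short helper. This is an illustrative sketch of the stated protocol, not the authors' evaluation script: each candidate list holds 1 positive and 9 negative responses, candidates are reranked by model score, and the metric is 1 if the positive lands in the top k.

```python
def recall_at_k(candidate_scores, positive_index, k):
    """Returns 1 if the positive response is ranked within the top k of the
    candidate list (here, 1 positive + 9 negatives), else 0."""
    ranked = sorted(range(len(candidate_scores)),
                    key=lambda i: candidate_scores[i], reverse=True)
    return int(positive_index in ranked[:k])
```

Averaging this indicator over the test set gives the reported recall@1, recall@2, and recall@5 scores; for generative models the score is the decoder's log-probability of the candidate rather than a discriminative probability.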
| { |
| "text": "Training Details For negative samples, during training, for every positive triple (video, chat, response) in the training set, we sample 3 random negative triples. For validation/test, we sample 9 random negative responses from elsewhere in the validation/test set. Also, the negative samples never come from the video corresponding to the positive response. More details of negative sampling and other training details (e.g., dimension/vocabulary sizes, visual feature details, validation-based hyperparameter tuning, and model selection) are discussed in the supplementary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "6 Results and Analysis", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "First, the overall human quality evaluation of our dataset (shown in Table 1) demonstrates that it contains 90% responses relevant to the video and/or chat context. Next, we also conduct a blind human study on the recall-based setup (on a set of 100 samples from the validation set), where we anonymize the positive response by randomly mixing it with 9 tricky negative responses in the retrieval list, and ask the user to select the most relevant response for the given video and/or chat context. We found that human performance on this task is around 55% recall@1, demonstrating that this 10-way-discriminative recall-based task setup is reasonably challenging even for humans. 7 At the same time, there is a lot of scope for future model improvements: the chance baseline is only 10%, and the best-performing model so far (see Sec. 6.3) achieves only 22% recall@1 (on the dev set), leaving a large 33% gap.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Human Evaluation of Dataset", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Table 3 displays all our primary results. We first discuss the results of our simple non-trained and trained baselines (see Sec. 4.1). The 'Most-Frequent-Response' baseline, which simply ranks the 10 candidate responses by their frequency in the training data, gets only around 10% recall@1. 8 Our other non-trained baselines, 'Chat-Response-Cosine' and 'Nearest Neighbor', which rank the candidate responses by the (Twitch-trained RNN encoder's vector) cosine similarity with the chat context and with the K-best training contexts' response vectors, respectively, achieve slightly better scores. We also show that our simple trained baselines (logistic regression and nearest neighbor) achieve relatively low scores, indicating that a simple, shallow model will not work on this challenging dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Results", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Next, we present the recall@k retrieval performance of our various discriminative models in Table 3. 7 This relatively low human recall@1 performance is because this is a challenging, 10-way-discriminative evaluation, i.e., the choice is made w.r.t. 9 tricky negative examples along with just 1 positive example (hence the chance baseline is only 10%). Note that these negative examples are an artifact of the recall-based evaluation only, and will not affect the more important real-world task of response generation (for which our dataset's response quality is 90%, as shown in Table 1). Moreover, our dataset filtering (see Sec. 3.1) also 'suppresses' simple baselines and makes the task even harder.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 572, |
| "end": 579, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discriminative Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "8 Note that the performance of this baseline is worse than the random-choice baseline (recall@1: 10%, recall@2: 20%, recall@5: 50%) because our dataset filtering process already suppresses frequent responses (see Sec. 3.1), in order to provide a challenging dataset for the community. Table 3 compares the dual encoder (chat context only), dual encoder (video context only), triple encoder, and TriDAF model with self-attention. Our dual encoder models are significantly better than random choice and all our simple baselines above, and they further show that the two context modalities carry complementary information, because using both of them together (in the 'Triple Encoder') improves the overall performance of the model. Finally, we show that our novel TriDAF model with self-attention performs significantly better than the triple encoder model. 9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discriminative Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Next, we evaluate the performance of our generative models with both retrieval-based recall@k scores and phrase-matching-based metrics as discussed in Sec. 5 (as well as human evaluation). We first discuss the retrieval-based recall@k results in Table 3. Starting with a simple sequence-to-sequence attention model with video-only, chat-only, and both video and chat encoders, the recall@k scores are better than those of all the simple baselines. Moreover, using both video+chat context is again better than using only one context modality. Finally, we show that the addition of the bidirectional attention flow mechanism improves the performance on all recall@k scores. 10 Note that generative model scores are lower than those of the discriminative models on the retrieval recall@k metric, which is expected (see the discussion in previous visual dialogue work (Das et al., 2017)), because discriminative models can tune to the biases in the response candidate options, but generative models are more useful for real-world tasks such as 9 Statistical significance of p < 0.01 for recall@1, based on the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) with 100K samples.", |
| "cite_spans": [ |
| { |
| "start": 838, |
| "end": 856, |
| "text": "(Das et al., 2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1015, |
| "end": 1016, |
| "text": "9", |
| "ref_id": null |
| }, |
| { |
| "start": 1096, |
| "end": 1110, |
| "text": "(Noreen, 1989;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 1111, |
| "end": 1138, |
| "text": "Efron and Tibshirani, 1994)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 246, |
| "end": 253, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generative Model Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "10 Stat. signif. p < 0.05 for recall@1 w.r.t. Seq2seq+Atten (video+chat); p < 0.01 w.r.t. chat- and video-only models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generative Model Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "generation of novel responses word-by-word from scratch in Siri/Alexa/Cortana-style applications (whereas discriminative models can only rank the pre-given list of responses). We also evaluate our generative models with phrase-level matching metrics: METEOR and ROUGE-L, as shown in Table 4. Again, our BiDAF model is statistically significantly better than the non-BiDAF model on both the METEOR (p < 0.01) and ROUGE-L (p < 0.02) metrics. Since dialogue systems can have several diverse, non-overlapping valid responses, we consider a multi-reference setup where all the utterances in the 10-sec response window are treated as valid responses. 11", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 286, |
| "end": 293, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generative Model Results", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Finally, we also perform human evaluation to compare our top two generative models, i.e., the video+chat seq2seq with attention and its extension with BiDAF (Sec. 4.3), on a sample of 100 examples. We take the generated responses from both models and randomly shuffle these pairs to anonymize model identity. We then ask two annotators (50 task instances each) to score the responses of the two models based on relevance. Note that the human evaluators were familiar with Twitch FIFA-18 video games as well as Twitch's unique set of chat mannerisms and emotes. As shown in Table 5, our BiDAF-based generative model performs better than the non-BiDAF one, which is already quite a strong video+chat encoder model with attention.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 587, |
| "end": 594, |
| "text": "Table 5", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Human Evaluation of Models", |
| "sec_num": "6.5" |
| }, |
| { |
| "text": "We also compare the effect of the different negative training triples that we discussed in Sec. 5. 11 Liu et al. (2016b) discussed that BLEU and most phrase-matching metrics are not good for evaluating dialogue systems. Also, generative models have very low phrase-matching metric scores because the generated response can be valid but still very different from the ground-truth reference (Lowe et al., 2015; Liu et al., 2016b; Li et al., 2016). We present results for relatively better metrics like paraphrase-enabled METEOR for completeness, but still focus on retrieval recall@k and human evaluation. [Figure 9: Attention visualization: the generated word 'goal' in the response intuitively aligns to goal-related video frames (top-3-weight frames highlighted) and context words (top-10-weight words highlighted).] Table 6 shows the comparison between one negative", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 147, |
| "text": "11", |
| "ref_id": null |
| }, |
| { |
| "start": 148, |
| "end": 166, |
| "text": "Liu et al. (2016b)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 434, |
| "end": 453, |
| "text": "(Lowe et al., 2015;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 454, |
| "end": 472, |
| "text": "Liu et al., 2016b;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 473, |
| "end": 489, |
| "text": "Li et al., 2016)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 95, |
| "end": 102, |
| "text": "Table 6", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 654, |
| "end": 662, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Negative Training Pairs", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "training triple (with just a negative response) vs. three negative training triples (one with a negative video context, one with a negative chat context, and another with a negative response), showing that the three-negatives setup is substantially better. Table 7 shows the performance comparison between the classification loss and the max-margin loss for our TriDAF model with self-attention (Sec. 4.2.2). We observe that the max-margin loss performs better than the classification loss, which is intuitive because the max-margin loss explicitly tries to differentiate between positive and negative training triples.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 261, |
| "end": 268, |
| "text": "Table 7", |
| "ref_id": "TABREF14" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Negative Training Pairs", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "For our best generative model (BiDAF), Table 8 shows that using a joint loss of cross-entropy and max-margin is better than using the cross-entropy loss alone (Sec. 4.3.1). The max-margin loss provides knowledge about the negative samples to the generative model, and hence improves the retrieval-based recall@k scores.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 39, |
| "end": 46, |
| "text": "Table 8", |
| "ref_id": "TABREF16" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generative Loss Functions", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Finally, we show some interesting output examples from both our discriminative and generative models in Fig. 8. Additionally, Fig. 9 visualizes that our models can learn some correct attention alignments from the generated output response word to the appropriate (goal-related) video frames as well as chat context words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 104, |
| "end": 110, |
| "text": "Fig. 8", |
| "ref_id": null |
| }, |
| { |
| "start": 126, |
| "end": 132, |
| "text": "Fig. 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Attention Visualization and Examples", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "We presented a new game-chat based video-context, many-speaker dialogue task and dataset. We also presented several baselines and state-of-the-art discriminative and generative models for this task. We hope that this testbed will be a good starting point to encourage future work on the challenging video-context dialogue paradigm. In future work, we plan to investigate the effects of multiple users, i.e., the multi-party aspect of this dataset. We also plan to explore advanced video features such as activity recognition, person identification, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Note that this filtering suppresses the performance of the simple frequent-response baseline described in Sec. 4.1. 4 Other preprocessing steps include: omitting utterances in the response window that refer to a speaker name outside the current chat context; removing non-representative utterances, e.g., those with hyperlinks; and replacing (anonymizing) all user identities mentioned in the utterances with a common tag (following similar intuitions from the Q&A community (Hermann et al., 2015)). 5 Note that this is substantially larger than or comparable to most current video captioning datasets. We plan to further extend our dataset based on diverse games and video types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our preliminary experiments, we found that adding self-attention is 0.92% better in recall@1 and faster than passing the hidden states through another layer of RNN, as done in Seo et al. (2017).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the reviewers for their helpful comments. This work was supported by DARPA YFA17-D17AP00022, ARO-YIP Award W911NF-18-1-0336, a Google Faculty Research Award, a Bloomberg Data Science Research Grant, and NVidia GPU awards. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Damien", |
| "middle": [], |
| "last": "Teney", |
| "suffix": "" |
| }, |
| { |
| "first": "Jake", |
| "middle": [], |
| "last": "Bruce", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Niko", |
| "middle": [], |
| "last": "S\u00fcnderhauf", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Reid", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Gould", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Van Den", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hengel", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S\u00fcnderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2017. Vision- and-language navigation: Interpreting visually- grounded navigation instructions in real environ- ments. In CVPR.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Towards the understanding of gaming audiences by modeling twitch emotes", |
| "authors": [ |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Barbieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Espinosa-Anke", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Third Workshop on Noisy Usergenerated Text", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francesco Barbieri, Luis Espinosa-Anke, Miguel Ballesteros, Horacio Saggion, et al. 2017. Towards the understanding of gaming audiences by modeling twitch emotes. In Third Workshop on Noisy User- generated Text (W-NUT 2017).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Models of natural language understanding", |
| "authors": [ |
| { |
| "first": "Madeleine", |
| "middle": [], |
| "last": "Bates", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "92", |
| "issue": "22", |
| "pages": "9977--9982", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Madeleine Bates. 1995. Models of natural lan- guage understanding. Proceedings of the National Academy of Sciences, 92(22):9977-9982.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Embodied conversation: integrating face and gesture into automatic spoken dialogue systems", |
| "authors": [ |
| { |
| "first": "Justine", |
| "middle": [], |
| "last": "Cassell", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Justine Cassell. 1999. Embodied conversation: inte- grating face and gesture into automatic spoken dia- logue systems.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Deep learning for spoken and text dialog systems", |
| "authors": [ |
| { |
| "first": "Asli", |
| "middle": [], |
| "last": "Celikyilmaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dilek", |
| "middle": [], |
| "last": "Hakkani-Tur", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Deep Learning in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Asli Celikyilmaz, Li Deng, and Dilek Hakkani-Tur. 2017. Deep learning for spoken and text dialog sys- tems. Deep Learning in Natural Language Process- ing (eds. Li Deng and Yang Liu).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Resolving referring expressions in conversational dialogs for natural user interfaces", |
| "authors": [ |
| { |
| "first": "Asli", |
| "middle": [], |
| "last": "Celikyilmaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaleh", |
| "middle": [], |
| "last": "Feizollahi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dilek", |
| "middle": [], |
| "last": "Hakkani-Tur", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruhi", |
| "middle": [], |
| "last": "Sarikaya", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "2094--2104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Asli Celikyilmaz, Zhaleh Feizollahi, Dilek Hakkani- Tur, and Ruhi Sarikaya. 2014. Resolving refer- ring expressions in conversational dialogs for natural user interfaces. In EMNLP, pages 2094-2104.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A universal model for flexible item selection in conversational dialogs", |
| "authors": [ |
| { |
| "first": "Asli", |
| "middle": [], |
| "last": "Celikyilmaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaleh", |
| "middle": [], |
| "last": "Feizollahi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dilek", |
| "middle": [], |
| "last": "Hakkani-Tur", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruhi", |
| "middle": [], |
| "last": "Sarikaya", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Automatic Speech Recognition and Understanding (ASRU)", |
| "volume": "", |
| "issue": "", |
| "pages": "361--367", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Asli Celikyilmaz, Zhaleh Feizollahi, Dilek Hakkani- Tur, and Ruhi Sarikaya. 2015. A universal model for flexible item selection in conversational dialogs. In Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, pages 361-367. IEEE.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Learning to sportscast: a test of grounded language acquisition", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "L" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 25th international conference on Machine learning", |
| "volume": "", |
| "issue": "", |
| "pages": "128--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David L Chen and Raymond J Mooney. 2008. Learn- ing to sportscast: a test of grounded language acqui- sition. In Proceedings of the 25th international con- ference on Machine learning, pages 128-135. ACM.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Effects of voice-based synthetic assistant on performance of emergency care provider in training", |
| "authors": [ |
| { |
| "first": "Praveen", |
| "middle": [], |
| "last": "Damacharla", |
| "suffix": "" |
| }, |
| { |
| "first": "Parashar", |
| "middle": [], |
| "last": "Dhakal", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Stumbo", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmad", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Javaid", |
| "suffix": "" |
| }, |
| { |
| "first": "Subhashini", |
| "middle": [], |
| "last": "Ganapathy", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "A" |
| ], |
| "last": "Malek", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [ |
| "C" |
| ], |
| "last": "Hodge", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Devabhaktuni", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "International Journal of Artificial Intelligence in Education", |
| "volume": "", |
| "issue": "", |
| "pages": "1--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Praveen Damacharla, Parashar Dhakal, Sebastian Stumbo, Ahmad Y Javaid, Subhashini Ganapathy, David A Malek, Douglas C Hodge, and Vijay Dev- abhaktuni. 2018. Effects of voice-based synthetic assistant on performance of emergency care provider in training. International Journal of Artificial Intel- ligence in Education, pages 1-22.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Embodied question answering", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Samyak", |
| "middle": [], |
| "last": "Datta", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgia", |
| "middle": [], |
| "last": "Gkioxari", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Das, Samyak Datta, Georgia Gkioxari, Ste- fan Lee, Devi Parikh, and Dhruv Batra. 2018. Em- bodied question answering. In CVPR.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Visual dialog", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Satwik", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "Khushi", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Avi", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Deshraj", |
| "middle": [], |
| "last": "Yadav", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "M", |
| "F" |
| ], |
| "last": "Moura", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In CVPR.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Spoken language understanding: a survey", |
| "authors": [ |
| { |
| "first": "Renato", |
| "middle": [], |
| "last": "De Mori", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Automatic Speech Recognition & Understanding, 2007. ASRU. IEEE Workshop on", |
| "volume": "", |
| "issue": "", |
| "pages": "365--376", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Renato De Mori. 2007. Spoken language understand- ing: a survey. In Automatic Speech Recognition & Understanding, 2007. ASRU. IEEE Workshop on, pages 365-376. IEEE.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Meteor universal: Language specific translation evaluation for any target language", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Denkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the ninth workshop on statistical machine translation", |
| "volume": "", |
| "issue": "", |
| "pages": "376--380", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Gemini: A natural language system for spoken-language understanding", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Dowding", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [ |
| "Mark" |
| ], |
| "last": "Gawron", |
| "suffix": "" |
| }, |
| { |
| "first": "Doug", |
| "middle": [], |
| "last": "Appelt", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bear", |
| "suffix": "" |
| }, |
| { |
| "first": "Lynn", |
| "middle": [], |
| "last": "Cherny", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Moore", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Moran", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "54--61", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Dowding, Jean Mark Gawron, Doug Appelt, John Bear, Lynn Cherny, Robert Moore, and Douglas Moran. 1993. Gemini: A natural language system for spoken-language understanding. In ACL, pages 54-61.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "User modeling for spoken dialogue system evaluation", |
| "authors": [ |
| { |
| "first": "Wieland", |
| "middle": [], |
| "last": "Eckert", |
| "suffix": "" |
| }, |
| { |
| "first": "Esther", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Pieraccini", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Automatic Speech Recognition and Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "80--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wieland Eckert, Esther Levin, and Roberto Pierac- cini. 1997. User modeling for spoken dialogue sys- tem evaluation. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 80-87. IEEE.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "An introduction to the bootstrap", |
| "authors": [ |
| { |
| "first": "Bradley", |
| "middle": [], |
| "last": "Efron", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "J" |
| ], |
| "last": "Tibshirani", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bradley Efron and Robert J Tibshirani. 1994. An intro- duction to the bootstrap. CRC press.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue", |
| "authors": [ |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ellen" |
| ], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [ |
| "Gurman" |
| ], |
| "last": "Bard", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Guhe", |
| "suffix": "" |
| }, |
| { |
| "first": "Robin", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Jon", |
| "middle": [], |
| "last": "Oberlander", |
| "suffix": "" |
| }, |
| { |
| "first": "Alois", |
| "middle": [], |
| "last": "Knoll", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "295--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mary Ellen Foster, Ellen Gurman Bard, Markus Guhe, Robin L Hill, Jon Oberlander, and Alois Knoll. 2008. The roles of haptic-ostensive referring expres- sions in cooperative, task-based human-robot dia- logue. In Proceedings of the 3rd ACM/IEEE in- ternational conference on Human robot interaction, pages 295-302. ACM.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Video highlight prediction using audience chat reactions", |
| "authors": [ |
| { |
| "first": "Cheng-Yang", |
| "middle": [], |
| "last": "Fu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joon", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "C" |
| ], |
| "last": "Berg", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cheng-Yang Fu, Joon Lee, Mohit Bansal, and Alexan- der C Berg. 2017. Video highlight prediction using audience chat reactions. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "User simulation for spoken dialogue systems: Learning and evaluation", |
| "authors": [ |
| { |
| "first": "Kallirroi", |
| "middle": [], |
| "last": "Georgila", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Lemon", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Ninth International Conference on Spoken Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kallirroi Georgila, James Henderson, and Oliver Lemon. 2006. User simulation for spoken dialogue systems: Learning and evaluation. In Ninth Interna- tional Conference on Spoken Language Processing.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Iqa: Visual question answering in interactive environments", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gordon", |
| "suffix": "" |
| }, |
| { |
| "first": "Aniruddha", |
| "middle": [], |
| "last": "Kembhavi", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Rastegari", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Redmon", |
| "suffix": "" |
| }, |
| { |
| "first": "Dieter", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| }, |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Farhadi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1712.03316" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. 2017. Iqa: Visual question answer- ing in interactive environments. arXiv preprint arXiv:1712.03316.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Semantic similarity applied to spoken dialogue summarization", |
| "authors": [ |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 20th international conference on Computational Linguistics, page 764. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iryna Gurevych and Michael Strube. 2004. Seman- tic similarity applied to spoken dialogue summariza- tion. In Proceedings of the 20th international con- ference on Computational Linguistics, page 764. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Teaching machines to read and comprehend", |
| "authors": [ |
| { |
| "first": "Karl", |
| "middle": [ |
| "Moritz" |
| ], |
| "last": "Hermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Kocisky", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| }, |
| { |
| "first": "Lasse", |
| "middle": [], |
| "last": "Espeholt", |
| "suffix": "" |
| }, |
| { |
| "first": "Will", |
| "middle": [], |
| "last": "Kay", |
| "suffix": "" |
| }, |
| { |
| "first": "Mustafa", |
| "middle": [], |
| "last": "Suleyman", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "1693--1701", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In NIPS, pages 1693-1701.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Dialog state tracking with attention-based sequence-to-sequence learning", |
| "authors": [ |
| { |
| "first": "Takaaki", |
| "middle": [], |
| "last": "Hori", |
| "suffix": "" |
| }, |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chiori", |
| "middle": [], |
| "last": "Hori", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Bret", |
| "middle": [], |
| "last": "Harsham", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Roux", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "R" |
| ], |
| "last": "Hershey", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Koji", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Jing", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaocheng", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Spoken Language Technology Workshop (SLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "552--558", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Takaaki Hori, Hai Wang, Chiori Hori, Shinji Watanabe, Bret Harsham, Jonathan Le Roux, John R Hershey, Yusuke Koji, Yi Jing, Zhaocheng Zhu, et al. 2016. Dialog state tracking with attention-based sequence- to-sequence learning. In Spoken Language Technol- ogy Workshop (SLT), 2016 IEEE, pages 552-558. IEEE.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Cobot in lambdamoo: A social statistics agent", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [ |
| "Lee" |
| ], |
| "last": "Isbell", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kearns", |
| "suffix": "" |
| }, |
| { |
| "first": "Dave", |
| "middle": [], |
| "last": "Kormann", |
| "suffix": "" |
| }, |
| { |
| "first": "Satinder", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Stone", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "AAAI/IAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "36--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charles Lee Isbell, Michael Kearns, Dave Kormann, Satinder Singh, and Peter Stone. 2000. Cobot in lambdamoo: A social statistics agent. In AAAI/IAAI, pages 36-41.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Tgif-qa: Toward spatio-temporal reasoning in visual question answering", |
| "authors": [ |
| { |
| "first": "Yunseok", |
| "middle": [], |
| "last": "Jang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yale", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Youngjae", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Youngjin", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Gunhee", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "2680--2688", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatio- temporal reasoning in visual question answering. In CVPR, pages 2680-8.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "The icsi meeting corpus", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Janin", |
| "suffix": "" |
| }, |
| { |
| "first": "Don", |
| "middle": [], |
| "last": "Baron", |
| "suffix": "" |
| }, |
| { |
| "first": "Jane", |
| "middle": [], |
| "last": "Edwards", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Ellis", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Gelbart", |
| "suffix": "" |
| }, |
| { |
| "first": "Nelson", |
| "middle": [], |
| "last": "Morgan", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Peskin", |
| "suffix": "" |
| }, |
| { |
| "first": "Thilo", |
| "middle": [], |
| "last": "Pfau", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Acoustics, Speech, and Signal Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The icsi meeting corpus. In Acous- tics, Speech, and Signal Processing, 2003. Proceed- ings.(ICASSP'03). 2003 IEEE International Confer- ence on, volume 1, pages I-I. IEEE.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Match: An architecture for multimodal dialogue systems", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Johnston", |
| "suffix": "" |
| }, |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Gunaranjan", |
| "middle": [], |
| "last": "Vasireddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanda", |
| "middle": [], |
| "last": "Stent", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Ehlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marilyn", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Whittaker", |
| "suffix": "" |
| }, |
| { |
| "first": "Preetam", |
| "middle": [], |
| "last": "Maloor", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "376--383", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Johnston, Srinivas Bangalore, Gunaranjan Vasireddy, Amanda Stent, Patrick Ehlen, Marilyn Walker, Steve Whittaker, and Preetam Maloor. 2002. Match: An architecture for multimodal dialogue systems. In Proceedings of the 40th Annual Meet- ing on Association for Computational Linguistics, pages 376-383. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A persona-based neural conversation model", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgios", |
| "middle": [ |
| "P" |
| ], |
| "last": "Spithourakis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In ACL.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "ROUGE: A package for automatic evaluation of summaries", |
| "authors": [ |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Text summarization branches out: Proceedings of the ACL-04 workshop", |
| "volume": "8", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text summariza- tion branches out: Proceedings of the ACL-04 work- shop, volume 8. Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A structured self-attentive sentence embedding", |
| "authors": [ |
| { |
| "first": "Zhouhan", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Minwei", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Cicero", |
| "middle": [], |
| "last": "Nogueira dos Santos", |
| "suffix": "" |
| }, |
| { |
| "first": "Mo", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In ICLR.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Task learning through visual demonstration and situated dialogue", |
| "authors": [ |
| { |
| "first": "Changsong", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| }, |
| { |
| "first": "Nishant", |
| "middle": [], |
| "last": "Shukla", |
| "suffix": "" |
| }, |
| { |
| "first": "Song-Chun", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "AAAI Workshop: Symbiotic Cognitive Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Changsong Liu, Joyce Y Chai, Nishant Shukla, and Song-Chun Zhu. 2016a. Task learning through vi- sual demonstration and situated dialogue. In AAAI Workshop: Symbiotic Cognitive Systems.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", |
| "authors": [ |
| { |
| "first": "Chia-Wei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Iulian", |
| "middle": [ |
| "V" |
| ], |
| "last": "Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Noseworthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Charlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016b. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Nissan", |
| "middle": [], |
| "last": "Pow", |
| "suffix": "" |
| }, |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "16th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. In 16th Annual Meeting of the Spe- cial Interest Group on Discourse and Dialogue.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Lstm based conversation models", |
| "authors": [ |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangfeng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Mari", |
| "middle": [], |
| "last": "Ostendorf", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1603.09457" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. Lstm based conversation models. arXiv preprint arXiv:1603.09457.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In EMNLP, pages 1412-1421.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering", |
| "authors": [ |
| { |
| "first": "Tegan", |
| "middle": [], |
| "last": "Maharaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Ballas", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Pal", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tegan Maharaj, Nicolas Ballas, Anna Rohrbach, Aaron Courville, and Christopher Pal. 2017. A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering. CVPR.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Generation and comprehension of unambiguous object descriptions", |
| "authors": [ |
| { |
| "first": "Junhua", |
| "middle": [], |
| "last": "Mao", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Toshev", |
| "suffix": "" |
| }, |
| { |
| "first": "Oana", |
| "middle": [], |
| "last": "Camburu", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "L" |
| ], |
| "last": "Yuille", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "11--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous ob- ject descriptions. In CVPR, pages 11-20.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Image-grounded conversations: Multimodal context for natural question and response generation", |
| "authors": [ |
| { |
| "first": "Nasrin", |
| "middle": [], |
| "last": "Mostafazadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgios", |
| "middle": [ |
| "P" |
| ], |
| "last": "Spithourakis", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucy", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1701.08251" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P Sp- ithourakis, and Lucy Vanderwende. 2017. Image- grounded conversations: Multimodal context for natural question and response generation. arXiv preprint arXiv:1701.08251.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Neural belief tracker: Data-driven dialogue state tracking", |
| "authors": [ |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Mrk\u0161i\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Diarmuid", |
| "middle": [ |
| "O" |
| ], |
| "last": "S\u00e9aghdha", |
| "suffix": "" |
| }, |
| { |
| "first": "Tsung-Hsien", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| }, |
| { |
| "first": "Blaise", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.03777" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid O S\u00e9aghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2016. Neural belief tracker: Data-driven dialogue state tracking. arXiv preprint arXiv:1606.03777.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "An intelligent personal assistant for task and time management", |
| "authors": [ |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Myers", |
| "suffix": "" |
| }, |
| { |
| "first": "Pauline", |
| "middle": [], |
| "last": "Berry", |
| "suffix": "" |
| }, |
| { |
| "first": "Jim", |
| "middle": [], |
| "last": "Blythe", |
| "suffix": "" |
| }, |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Conley", |
| "suffix": "" |
| }, |
| { |
| "first": "Melinda", |
| "middle": [], |
| "last": "Gervasio", |
| "suffix": "" |
| }, |
| { |
| "first": "Deborah", |
| "middle": [ |
| "L" |
| ], |
| "last": "McGuinness", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Morley", |
| "suffix": "" |
| }, |
| { |
| "first": "Avi", |
| "middle": [], |
| "last": "Pfeffer", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Pollack", |
| "suffix": "" |
| }, |
| { |
| "first": "Milind", |
| "middle": [], |
| "last": "Tambe", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "AI Magazine", |
| "volume": "28", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karen Myers, Pauline Berry, Jim Blythe, Ken Conley, Melinda Gervasio, Deborah L McGuinness, David Morley, Avi Pfeffer, Martha Pollack, and Milind Tambe. 2007. An intelligent personal assistant for task and time management. AI Magazine, 28(2):47.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Computer-intensive methods for testing hypotheses", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [ |
| "W" |
| ], |
| "last": "Noreen", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Video highlights detection and summarization with lag-calibration based on concept-emotion mapping of crowdsourced time-sync comments", |
| "authors": [ |
| { |
| "first": "Qing", |
| "middle": [], |
| "last": "Ping", |
| "suffix": "" |
| }, |
| { |
| "first": "Chaomei", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "EMNLP Workshop on New Frontiers in Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qing Ping and Chaomei Chen. 2017. Video highlights detection and summarization with lag-calibration based on concept-emotion mapping of crowd- sourced time-sync comments. In EMNLP Workshop on New Frontiers in Summarization.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Natural language generation in dialog systems", |
| "authors": [ |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Marilyn", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the first international conference on Human language technology research", |
| "volume": "", |
| "issue": "", |
| "pages": "1--4", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Owen Rambow, Srinivas Bangalore, and Marilyn Walker. 2001. Natural language generation in di- alog systems. In Proceedings of the first interna- tional conference on Human language technology research, pages 1-4. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A corpus collection and annotation framework for learning multimodal clarification strategies", |
| "authors": [ |
| { |
| "first": "Verena", |
| "middle": [], |
| "last": "Rieser", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivana", |
| "middle": [], |
| "last": "Kruijff-Korbayov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Lemon", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "6th SIGdial Workshop on DISCOURSE and DIALOGUE", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Verena Rieser, Ivana Kruijff-Korbayov\u00e1, and Oliver Lemon. 2005. A corpus collection and annota- tion framework for learning multimodal clarifica- tion strategies. In 6th SIGdial Workshop on DIS- COURSE and DIALOGUE.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Learning effective multimodal dialogue strategies from wizard-of-oz data: Bootstrapping and evaluation", |
| "authors": [ |
| { |
| "first": "Verena", |
| "middle": [], |
| "last": "Rieser", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Lemon", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "638--646", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Verena Rieser and Oliver Lemon. 2008. Learning ef- fective multimodal dialogue strategies from wizard- of-oz data: Bootstrapping and evaluation. In ACL, pages 638-646.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Data-driven response generation in social media", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "William B", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "583--593", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In EMNLP, pages 583-593. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "An overview of end-to-end language understanding and dialog management for personal digital assistants", |
| "authors": [ |
| { |
| "first": "Ruhi", |
| "middle": [], |
| "last": "Sarikaya", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [ |
| "A" |
| ], |
| "last": "Crook", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Marin", |
| "suffix": "" |
| }, |
| { |
| "first": "Minwoo", |
| "middle": [], |
| "last": "Jeong", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Philippe", |
| "middle": [], |
| "last": "Robichaud", |
| "suffix": "" |
| }, |
| { |
| "first": "Asli", |
| "middle": [], |
| "last": "Celikyilmaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Young-Bum", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Rochette", |
| "suffix": "" |
| }, |
| { |
| "first": "Omar", |
| "middle": [ |
| "Zia" |
| ], |
| "last": "Khan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaohu", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Spoken Language Technology Workshop (SLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "391--397", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruhi Sarikaya, Paul A Crook, Alex Marin, Minwoo Jeong, Jean-Philippe Robichaud, Asli Celikyilmaz, Young-Bum Kim, Alexandre Rochette, Omar Zia Khan, Xiaohu Liu, et al. 2016. An overview of end-to-end language understanding and dialog man- agement for personal digital assistants. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 391-397. IEEE.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Taking cscw seriously", |
| "authors": [ |
| { |
| "first": "Kjeld", |
| "middle": [], |
| "last": "Schmidt", |
| "suffix": "" |
| }, |
| { |
| "first": "Liam", |
| "middle": [], |
| "last": "Bannon", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computer Supported Cooperative Work (CSCW)", |
| "volume": "", |
| "issue": "", |
| "pages": "7--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kjeld Schmidt and Liam Bannon. 1992. Taking cscw seriously. Computer Supported Cooperative Work (CSCW), 1(1-2):7-40.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Bidirectional attention flow for machine comprehension", |
| "authors": [ |
| { |
| "first": "Minjoon", |
| "middle": [], |
| "last": "Seo", |
| "suffix": "" |
| }, |
| { |
| "first": "Aniruddha", |
| "middle": [], |
| "last": "Kembhavi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Farhadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hannaneh", |
| "middle": [], |
| "last": "Hajishirzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Multiresolution recurrent neural networks: An application to dialogue response generation", |
| "authors": [ |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Vlad Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Klinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Tesauro", |
| "suffix": "" |
| }, |
| { |
| "first": "Kartik", |
| "middle": [], |
| "last": "Talamadupula", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron C", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "3288--3294", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kar- tik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron C Courville. 2017a. Multiresolution re- current neural networks: An application to dialogue response generation. In AAAI, pages 3288-3294.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", |
| "authors": [ |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Vlad Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "C" |
| ], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "3776--3784", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using gener- ative hierarchical neural network models. In AAAI, pages 3776-3784.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "A hierarchical latent variable encoder-decoder model for generating dialogues", |
| "authors": [ |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Vlad Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Charlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "C" |
| ], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "3295--3301", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating di- alogues. In AAAI, pages 3295-3301.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Interactive reinforcement learning for taskoriented dialogue management", |
| "authors": [ |
| { |
| "first": "Pararth", |
| "middle": [], |
| "last": "Shah", |
| "suffix": "" |
| }, |
| { |
| "first": "Dilek", |
| "middle": [], |
| "last": "Hakkani-T\u00fcr", |
| "suffix": "" |
| }, |
| { |
| "first": "Larry", |
| "middle": [], |
| "last": "Heck", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "NIPS Deep Learning for Action and Interaction Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pararth Shah, Dilek Hakkani-T\u00fcr, and Larry Heck. 2016. Interactive reinforcement learning for task- oriented dialogue management. In NIPS Deep Learning for Action and Interaction Workshop.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Reinforcement learning for spoken dialogue systems", |
| "authors": [ |
| { |
| "first": "Satinder", |
| "middle": [ |
| "P" |
| ], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "J" |
| ], |
| "last": "Kearns", |
| "suffix": "" |
| }, |
| { |
| "first": "Diane", |
| "middle": [ |
| "J" |
| ], |
| "last": "Litman", |
| "suffix": "" |
| }, |
| { |
| "first": "Marilyn", |
| "middle": [ |
| "A" |
| ], |
| "last": "Walker", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "956--962", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satinder P Singh, Michael J Kearns, Diane J Litman, and Marilyn A Walker. 2000. Reinforcement learn- ing for spoken dialogue systems. In Advances in Neural Information Processing Systems, pages 956- 962.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "A neural network approach to context-sensitive generation of conversational responses", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangfeng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian-Yun", |
| "middle": [], |
| "last": "Nie", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1506.06714" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive gen- eration of conversational responses. arXiv preprint arXiv:1506.06714.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "On-line active reward learning for policy optimisation in spoken dialogue systems", |
| "authors": [ |
| { |
| "first": "Pei-Hao", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Milica", |
| "middle": [], |
| "last": "Gasic", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Mrksic", |
| "suffix": "" |
| }, |
| { |
| "first": "Lina", |
| "middle": [], |
| "last": "Rojas-Barahona", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Ultes", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vandyke", |
| "suffix": "" |
| }, |
| { |
| "first": "Tsung-Hsien", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas- Barahona, Stefan Ultes, David Vandyke, Tsung- Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken di- alogue systems. In ACL.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Movieqa: Understanding stories in movies through question-answering", |
| "authors": [ |
| { |
| "first": "Makarand", |
| "middle": [], |
| "last": "Tapaswi", |
| "suffix": "" |
| }, |
| { |
| "first": "Yukun", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rainer", |
| "middle": [], |
| "last": "Stiefelhagen", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Torralba", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "4631--4640", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. Movieqa: Understanding stories in movies through question-answering. In CVPR, pages 4631- 4640.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Sequence to sequence-video to text", |
| "authors": [ |
| { |
| "first": "Subhashini", |
| "middle": [], |
| "last": "Venugopalan", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Donahue", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Darrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Saenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "4534--4542", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence-video to text. In CVPR, pages 4534-4542.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "A neural conversational model", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ICML Deep Learning Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. In Proceedings of ICML Deep Learn- ing Workshop.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Guesswhat?! visual object discovery through multi-modal dialogue", |
| "authors": [ |
| { |
| "first": "Harm", |
| "middle": [], |
| "last": "De Vries", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Strub", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarath", |
| "middle": [], |
| "last": "Chandar", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Pietquin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Larochelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Advances in automatic meeting record creation and access", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Bett", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Metze", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Ries", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Schaaf", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| }, |
| { |
| "first": "Hagen", |
| "middle": [], |
| "last": "Soltau", |
| "suffix": "" |
| }, |
| { |
| "first": "Hua", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Zechner", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Acoustics, Speech, and Signal Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "597--600", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Waibel, Michael Bett, Florian Metze, Klaus Ries, Thomas Schaaf, Tanja Schultz, Hagen Soltau, Hua Yu, and Klaus Zechner. 2001. Advances in auto- matic meeting record creation and access. In Acous- tics, Speech, and Signal Processing, 2001. Proceed- ings.(ICASSP'01). 2001 IEEE International Confer- ence on, volume 1, pages 597-600. IEEE.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Eliza: a computer program for the study of natural language communication between man and machine", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Weizenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 1966, |
| "venue": "Communications of the ACM", |
| "volume": "9", |
| "issue": "1", |
| "pages": "36--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Weizenbaum. 1966. Elizaa computer program for the study of natural language communication be- tween man and machine. Communications of the ACM, 9(1):36-45.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", |
| "authors": [ |
| { |
| "first": "Tsung-Hsien", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| }, |
| { |
| "first": "Milica", |
| "middle": [], |
| "last": "Gasic", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Mrksic", |
| "suffix": "" |
| }, |
| { |
| "first": "Pei-Hao", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vandyke", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1508.01745" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural lan- guage generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "CHALET: Cornell house agent learning environment", |
| "authors": [ |
| { |
| "first": "Claudia", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipendra", |
| "middle": [], |
| "last": "Misra", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Bennnett", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Walsman", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1801.07357" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claudia Yan, Dipendra Misra, Andrew Bennnett, Aaron Walsman, Yonatan Bisk, and Yoav Artzi. 2018. CHALET: Cornell house agent learning en- vironment. arXiv preprint arXiv:1801.07357.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Pomdp-based statistical spoken dialog systems: A review", |
| "authors": [ |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "Milica", |
| "middle": [], |
| "last": "Ga\u0161i\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Blaise", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "D" |
| ], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the IEEE", |
| "volume": "101", |
| "issue": "5", |
| "pages": "1160--1179", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steve Young, Milica Ga\u0161i\u0107, Blaise Thomson, and Ja- son D Williams. 2013. Pomdp-based statistical spo- ken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Probabilistic methods in spokendialogue systems", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Steve", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences", |
| "volume": "358", |
| "issue": "", |
| "pages": "1389--1402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steve J Young. 2000. Probabilistic methods in spoken- dialogue systems. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 358(1769):1389-1402.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "A joint speaker-listener-reinforcer model for referring expressions", |
| "authors": [ |
| { |
| "first": "Licheng", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Tamara", |
| "middle": [ |
| "L" |
| ], |
| "last": "Berg", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017. A joint speaker-listener-reinforcer model for referring expressions. In CVPR.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Sample page of live broadcast of FIFA-18 game on twitch.tv with concurrent user chat.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Frequent words in our Twitch-FIFA dataset.", |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Overview of our 'triple encoder' discriminative model, with bidirectional-LSTM-RNN encoders for video, chat context, and response.", |
| "num": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Overview of our tridirectional attention flow (TriDAF) model with all pairwise modality attention modules, as well as self-attention on video context, chat context, and response as inputs.", |
| "num": null |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "a , and b v a are trainable self-attention parameters. The final representation vector of the full video context after self-attention is\u0109", |
| "num": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "html": null, |
| "text": "Human evaluation of our dataset, comparing our filtered responses versus the first response in the window (for relevance w.r.t. video and chat contexts).", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "html": null, |
| "text": "Twitch-FIFA dataset's chat statistics (lengths are defined in terms of number of words).", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "html": null, |
| "text": "Figure 3: Distribution of #utterances in chat context (w.r.t. the #training examples for each case).", |
| "content": "<table><tr><td/><td>3500</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>3000</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Word Frequency</td><td>2000 2500</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>1500</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>1000</td><td>good</td><td>game</td><td>lol</td><td>team</td><td>fifa</td><td>play</td><td>lul</td><td>man</td><td>record</td><td>inceptionlove</td><td>bro</td><td>love</td><td>kappa</td><td>better</td><td>player</td><td>best</td><td>time</td><td>players</td><td>sell</td><td><3</td></tr></table>", |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "html": null, |
| "text": ". . . . . . . . . . . .", |
| "content": "<table><tr><td/><td/><td/><td/><td/><td>. . . . . .</td></tr><tr><td>response-to-video</td><td>chat-to-video</td><td>video-to-chat</td><td>response-to-chat</td><td>video-to-response</td><td>chat-to-response</td></tr><tr><td>attention</td><td>attention</td><td>attention</td><td>attention</td><td>attention</td><td>attention</td></tr></table>", |
| "num": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "html": null, |
| "text": "Performance of our baselines, discriminative models, and generative models for recall@k metrics on our Twitch-FIFA test set. C and V represent chat and video context, respectively.", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "html": null, |
| "text": "Performance of our generative models on phrase matching metrics.", |
| "content": "<table><tr><td>Models</td><td>Relevance</td></tr><tr><td>Seq2seq + Atten. (C+V) wins</td><td>41.0 %</td></tr><tr><td>BiDAF wins</td><td>34.0 %</td></tr><tr><td>Non-distinguishable</td><td>25.0 %</td></tr></table>", |
| "num": null |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "html": null, |
| "text": "Human evaluation comparing the baseline and BiDAF generative models.", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF11": { |
| "type_str": "table", |
| "html": null, |
| "text": "Ablation (dev) of one vs. three negative examples for TriDAF self-attention discriminative model.", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF12": { |
| "type_str": "table", |
| "html": null, |
| "text": "bloodtrail bloodtrail bloodtrail bloodtrail bloodtrail || yoooo || kappapride || xxuxx skillzzzz , favourite player you have used this year ? || pl3ad aa9love || are you playin with ksi ? ? kappa xxuxx || bought okocha cuz of you ant . first game 2 goals 3 assists ! game changer thank you m8 || play || ! pause || resume || twerkchoke twerkchoke twerkchoke || lul is aids || where has all thr challenges gone aswell ? || did mat yet messi ? || hellllllllllllllllllllllllllllllllllllllllllo || put messi on get in behind if u can || chris is getting ronaldo and messi || no one wants jamies coctail sausage haha || free kick with messi Ground-truth: play it to messi he makes good runs Generated: get messi for the other teamFigure 8: Output retrieval (left) and generative (right) examples from TriDAF and BiDAF models, resp.", |
| "content": "<table><tr><td>1) good pass jebaited</td><td>6) do you have a main squad</td></tr><tr><td>2) shawn mendez kreygasm</td><td>7) otw nelson for 47k imma buy</td></tr><tr><td>kreygasm</td><td>right now on xbox</td></tr><tr><td>3) can say that i am american</td><td>8) do *</td></tr><tr><td>4) ! camera</td><td>9) inceptionderp inceptionlove</td></tr><tr><td>5) can you notice me</td><td>10) bpl is over priced</td></tr><tr><td/><td>Chat Context: xxuxx haha 19 is not bad brotha . i didnt even qualify lol feelbad ||</td></tr><tr><td/><td>pogchamp || siiiii pogchamp || boooooooooooooo lul || you guys think i</td></tr><tr><td/><td>should get dembele or if alessandrini</td></tr><tr><td/><td>Response: comeback goal</td></tr></table>", |
| "num": null |
| }, |
| "TABREF14": { |
| "type_str": "table", |
| "html": null, |
| "text": "Ablation of classification vs. max-margin loss on our TriDAF discriminative model (on dev set).", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF16": { |
| "type_str": "table", |
| "html": null, |
| "text": "Ablation of cross-entropy loss vs. cross-entropy+maxmargin loss for our BiDAF-based generative model (on dev set).", |
| "content": "<table/>", |
| "num": null |
| } |
| } |
| } |
| } |