| { |
| "paper_id": "E17-1041", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:53:39.033961Z" |
| }, |
| "title": "A Hierarchical Neural Model for Learning Sequences of Dialogue Acts", |
| "authors": [ |
| { |
| "first": "Quan", |
| "middle": [ |
| "Hung" |
| ], |
| "last": "Tran", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Monash University Clayton", |
| "location": { |
| "postCode": "3800", |
| "region": "VICTORIA", |
| "country": "Australia" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Ingrid", |
| "middle": [], |
| "last": "Zukerman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Monash University Clayton", |
| "location": { |
| "postCode": "3800", |
| "region": "VICTORIA", |
| "country": "Australia" |
| } |
| }, |
| "email": "ingrid.zukerman@monash.edu" |
| }, |
| { |
| "first": "Gholamreza", |
| "middle": [], |
| "last": "Haffari", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Monash University Clayton", |
| "location": { |
| "postCode": "3800", |
| "region": "VICTORIA", |
| "country": "Australia" |
| } |
| }, |
| "email": "gholamreza.haffari@monash.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model.", |
| "pdf_parse": { |
| "paper_id": "E17-1041", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The sequence-labeling task involves learning a model that maps an input sequence to an output sequence. Many NLP problems can be treated as sequence-labeling tasks, e.g., part-of-speech (PoS) tagging (Toutanova et al., 2003; Toutanova and Manning, 2000) , machine translation (Brown et al., 1993) and automatic speech recognition (Gales and Young, 2008) . Recurrent Neural Nets (RNNs) have been the workhorse model for many NLP sequence-labeling tasks, e.g., machine translation and speech recognition (Amodei et al., 2015) , due to their ability to capture long-range dependencies inherent in natural language.", |
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 224, |
| "text": "(Toutanova et al., 2003;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 225, |
| "end": 253, |
| "text": "Toutanova and Manning, 2000)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 276, |
| "end": 296, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 330, |
| "end": 353, |
| "text": "(Gales and Young, 2008)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 378, |
| "end": 384, |
| "text": "(RNNs)", |
| "ref_id": null |
| }, |
| { |
| "start": 502, |
| "end": 523, |
| "text": "(Amodei et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose a hierarchical RNN for labeling a sequence of utterances (i.e., contributions) in a dialogue with their Dialogue Acts (DAs). This task is particularly useful for dialogue systems, as knowing the DA of an utterance supports its interpretation, and the generation of an appropriate response. The DA classification problem differs from the aforementioned tasks in the structure of the input and the immediacy of the output. The input in these tasks is a sequence of tokens, e.g., a sequence of words in PoS tagging; while in DA classification, the input is hierarchical, i.e., a conversation comprises a sequence of utterances, each of which has a sequence of tokens ( Figure 1 ). In addition, to be useful for dialogue systems, the DA of an utterance must be determined immediately, hence a bi-directional approach is not feasible.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 692, |
| "end": 700, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As mentioned above, RNNs are able to capture long-range dependencies. This ability was harnessed by Shen and Lee (2016) for DA classification. However, they ignored the conversational dimension of the data, treating the utterances in a dialogue as separate instances -an assumption that results in loss of information. To overcome this limitation, we designed a two-layer RNN model that leverages the hierarchical nature of dialogue data: an outer-layer RNN encodes the conversational dimension, and an inner-layer RNN encodes the utterance dimension.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One of the difficulties of sequence labeling is that different elements of an input sequence have different degrees of importance for the task at hand (Shen and Lee, 2016), and the noise introduced by less important elements might degrade the performance of a labeling model. To address this problem, we incorporate into our model the attention mechanism described in (Shen and Lee, 2016), which has yielded performance improvements in DA classification compared to traditional RNNs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our empirical results show that our hierarchical RNN model with an attentional mechanism out- performs strong baselines on two popular datasets: Switchboard (Jurafsky et al., 1997; Stolcke et al., 2000) and MapTask (Anderson et al., 1991) . In addition, we provide an empirical analysis of the impact of the main aspects of our model on performance: utterance RNN, conversation RNN, and information source for the attention mechanism.", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 180, |
| "text": "(Jurafsky et al., 1997;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 181, |
| "end": 202, |
| "text": "Stolcke et al., 2000)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 215, |
| "end": 238, |
| "text": "(Anderson et al., 1991)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper is organised as follows. In the next section, we discuss related research in DA classification. In Section 3, we describe our RNN. Our experiments and results are presented in Section 4, followed by our analysis and concluding remarks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Independent DA classification. In this approach, each utterance is treated as a separate instance, which allows the application of general classification algorithms. Julia et al. (2010) employed a Support Vector Machine (SVM) with n-gram features obtained from an utterance-level Hidden Markov Model (HMM) to ascribe DAs to audio signals and textual transcriptions of the MapTask corpus. Webb et al. (2005) used a similar approach, employing cue phrases as features.", |
| "cite_spans": [ |
| { |
| "start": 388, |
| "end": 406, |
| "text": "Webb et al. (2005)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Sequence-based DA classification. This approach takes advantage of the sequential nature of conversations. In one of the earliest works in DA classification, Stolcke et al. (2000) used an HMM with a trigram language model to classify DAs in the Switchboard corpus, achieving an accuracy of 71.0%. In this work, the trigram language model was employed to calculate the symbol emission probability of the HMM. Surendran et al. (2006) also used an HMM, but employed output symbol probabilities produced by an SVM classifier, instead of emission probabilities obtained from a trigram language model. More recently, the Recurrent Convolutional Neural Network model proposed by Kalchbrenner and Blunsom (2013) achieved an accuracy of 73.9% on the Switchboard corpus. In this work, a Convolutional Neural Network encodes each utterance into a vector, which is then treated as input to a conversationlevel RNN. The DA is then classified using a softmax layer applied on top of the hidden states of the RNN.", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 179, |
| "text": "Stolcke et al. (2000)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 408, |
| "end": 431, |
| "text": "Surendran et al. (2006)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 672, |
| "end": 703, |
| "text": "Kalchbrenner and Blunsom (2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Attention in Neural Models. Attentional Neural Models have been successfully applied to sequence-to-sequence mapping tasks, notably machine translation and DA classification. Bahdanau et al. (2014) proposed an attentional encoder-decoder architecture for machine translation. The encoder encodes the input sequence into a sequence of hidden vectors; the decoder decodes the information stored in the hidden sequence to generate the output; and the attentional mechanism is used to summarize a sentence into a context vector dynamically, helping the decoder decide which part of the sequence to attend to when generating a target word. As mentioned above, Shen and Lee (2016) employed an attentional RNN for independent DA classification; they achieved an accuracy of 72.6% on textual transcriptions of the Switchboard corpus. served sequence to its label sequence, based on the following decomposition:", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 197, |
| "text": "Bahdanau et al. (2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P ({y 1 , y 2 , ..., y m }|{o o o 1 , o o o 2 , ..., o o o m }) = m t=1 P (y t |y y y <t , o o o \u2264t )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Note that our model conditions on the full history, rather than a finite history as done in Markov models, such as maximum entropy Markov models (McCallum et al., 2000) . We employ neural networks to model the constituent conditional distributions. Our model comprises three main elements ( Figure 2 ): (1) an utterance-level RNN that encodes the information within the utterances; (2) an attentional mechanism that highlights the important parts of an input utterance, and summarizes the information within the utterance into a real-valued vector; and (3) a conversation-level RNN that encodes the information of the whole dialogue sequence. As discussed in Section 1, our hierarchical-RNN design was motivated by the structure of the input data, while the attentional mechanism has proven to be effective in DA classification (Shen and Lee, 2016).", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 168, |
| "text": "(McCallum et al., 2000)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 291, |
| "end": 299, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Utterance-level RNN. This RNN was implemented using LSTM (Hochreiter and Schmidhuber, 1997; Graves, 2013) . First, an embedding matrix maps each token (e.g., word or punctuation marker) into a dense vector representation. Let us denote the sequence of tokens in the tth utterance as o o o t := {o 1 t , o 2 t , . . . , o n t }, which is mapped into the sequence of embedding vectors", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 91, |
| "text": "(Hochreiter and Schmidhuber, 1997;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 92, |
| "end": 105, |
| "text": "Graves, 2013)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x x x t := {x x x 1 t , x x x 2 t , .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": ". . , x x x n t } using the token embedding table w w w:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x x i t = e w e w e w (o i t )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The utterance RNN then takes as input this sequence of vectors, and produces a sequence of corresponding hidden vectors h h h", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "t = {h h h 1 t , h h h 2 t , .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": ". . , h h h n t }, which capture the information within the tokens, and put the tokens in their sentential context:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h h h i t = RNN utter (h h h i\u22121 t , x x x i t )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The parameters of the utterance RNN and the token embeddings are learned during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Attentional mechanism. This mechanism summarizes the hidden vectors of the utterance-level RNN into a single vector representing the whole utterance. The attention vector is a sequence of positive numbers that sum to 1, where each number corresponds to a token in an utterance, and represents the importance of the token for understanding the DA associated with the utterance. The final representation z z z t of the t-th utterance is the sum of the corresponding elements of its hidden vectors weighted by attention weights:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z z z t = i \u03b1 i t h h h i t", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We posit that the main factors for determining the importance of a token for DA classification are: (1) the meaning of the token, as represented by its embedding vector; and (2) the full context of the conversation, particularly the previous DA. For example, if the DA of an utterance is Yes-No-Question, and there is a \"yes\" or \"no\" token in the next utterance, this token is likely to be important. Equation 5 integrates these factors to compute attention scores:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "s i t = U U U \u2022 tanh W (in) \u2022 x x x i t + W (co) \u2022 g g g t\u22121 + e a", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "e a e a (y t\u22121 ) + b b b (in) ", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 29, |
| "text": "(in)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where vector e a e a e a (y t\u22121 ) denotes the embedding of the previous DA, which is similar to the embedding of tokens; and vector g g g t\u22121 is the previous hidden vector of the conversation-level RNN, detailed below, which summarizes the conversation so far. W (in) and W (co) are parameter matrices for the input tokens and the conversational context respectively, and U U U and b b b (in) are parameter vectors -all of which are learned during training. The scores s i t are mapped into a probability vector by means of a softmax function:", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 267, |
| "text": "(in)", |
| "ref_id": null |
| }, |
| { |
| "start": 274, |
| "end": 278, |
| "text": "(co)", |
| "ref_id": null |
| }, |
| { |
| "start": 388, |
| "end": 392, |
| "text": "(in)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 \u03b1 \u03b1 t = softmax(s s s t )", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Conversation-level RNN. This RNN is structurally similar to the utterance-level RNN. The input to the conversation-level RNN is the sequence of vectors z z z generated for the utterances in a conversation, which is then encoded by the RNN into a sequence of hidden vectors g g g:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "g g g t = RNN convers (g g g t\u22121 , z z z t )", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This information is then used in the generation of the output DA:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "y t |y y y <t , o o o \u2264t \u223c softmax(W W W (out) \u2022g g g t + b b b (out) ) (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where the matrix W W W (out) , vector b b b (out) and the parameters of the conversation-level network RNN convers are learned during the training. During testing, ideally a given sequence of observed utterances o o o should be decoded to a label sequence y y y that maximizes the conditional probability P (y y y|o o o) according to the model. However, finding the highest-scoring label sequence is a computationally hard problem, since the conversation-level RNN does not lend itself to dynamic programming. Therefore, we employ a greedy decoding approach, where, going left-toright, at each step we choose the y t with the highest probability in the local DA distribution. This method is common practice in sequence-labeling RNNs, e.g., in neural machine translation (Bahdanau et al., 2014; Luong et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 28, |
| "text": "(out)", |
| "ref_id": null |
| }, |
| { |
| "start": 44, |
| "end": 49, |
| "text": "(out)", |
| "ref_id": null |
| }, |
| { |
| "start": 770, |
| "end": 793, |
| "text": "(Bahdanau et al., 2014;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 794, |
| "end": 813, |
| "text": "Luong et al., 2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We tested our models on the Switchboard corpus (Jurafsky et al., 1997; Stolcke et al., 2000) and the MapTask corpus (Anderson et al., 1991) -two popular datasets used for DA classification. At this stage of our research, we consider only transcriptions of the conversations in both corpora (the incorporation of phonetic input (Taylor et al., 1998; Wright Hastie et al., 2002; Julia et al., 2010) is the subject of future work). Thus, we compare our results only with those obtained by systems that employ transcriptions exclusively.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 70, |
| "text": "(Jurafsky et al., 1997;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 71, |
| "end": 92, |
| "text": "Stolcke et al., 2000)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 116, |
| "end": 139, |
| "text": "(Anderson et al., 1991)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 327, |
| "end": 348, |
| "text": "(Taylor et al., 1998;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 349, |
| "end": 376, |
| "text": "Wright Hastie et al., 2002;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 377, |
| "end": 396, |
| "text": "Julia et al., 2010)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data sets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Switchboard corpus. This corpus contains DAannotated transcriptions of 1155 telephone conversations with no specific topic, which have an average of 176 utterances. Originally, there were approximately 226 DA tags in the corpus, but in the DA classification literature, the tags are usually clustered into 42 tags. 1 Table 1(a) shows percentages of the seven most frequent tags in the data. Following (Stolcke et al., 2000) , in our experiments we use 1115 conversations for training, 21 for development and 19 for testing.", |
| "cite_spans": [ |
| { |
| "start": 315, |
| "end": 316, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 401, |
| "end": 423, |
| "text": "(Stolcke et al., 2000)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data sets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "MapTask corpus. This is a richly annotated corpus that comprises 128 dialogues about instruction following, containing 212 utterances on average. Each conversation has an instruction giver and an instruction follower. The instruction giver gives directions with reference to a map, which the instruction follower must follow. The MapTask corpus has 13 DA tags, including the \"unclassifiable\" tag. Table 1 (b) shows percentages of the seven most frequent tags in the data. We randomly split this data into 80% training, 10% development and 10% test sets, which contain 103, 12 and 13 conversations respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 397, |
| "end": 404, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data sets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We experimented with different embedding sizes and hidden layer dimensions for our model HA-RNN, and selected the following, which yielded the best performance with reasonable run times. The word-embedding size was set to 250, and the DA-embedding size to 180. The hidden dimension of the utterance-level RNN was set to 160, and the hidden dimension of the conversationlevel RNN was set to 250. Our model was implemented with the CNN package. 2 During training, the negative log-likelihood was optimized using Adagrad (Duchi et al., 2011) , with dropout rate 0.5 to prevent over-fitting (Srivastava et al., 2014) . Training terminated when the log-likelihood of the development set did not improve. As mentioned in Section 3, during testing, the sequence of output labels was generated with greedy decoding. Statistical significance was computed on the MapTask test data using McNemar's test with \u03b1 = 0.05 (we could not compute statistical significance for the Switchboard results, because they were obtained from the literature, and we did not have access to per-conversation labels).", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 538, |
| "text": "(Duchi et al., 2011)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 587, |
| "end": 612, |
| "text": "(Srivastava et al., 2014)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Switchboard. We compare our model's performance with that of the following strong baselines: (RCNN) the recurrent convolutional neural network model from (Kalchbrenner and Blunsom, 2013) ; (RNN-Attentional-C) the attentionbased RNN classifier from (Shen and Lee, 2016); and (HMM-trigram-C) the HMM-based classifier from (Stolcke et al., 2000) . The results in Table 2 show that our model outperforms these baselines. 3 The higher ac-2 github.com/clab/cnn. 3 Two other works on Switchboard DA classification (Gamb\u00e4ck et al., 2011; Webb and Ferguson, 2010) used experimental setups that differ from ours, respectively obtaining curacy of our model compared to classifierbased approaches (i.e., RNN-Attentional-C and HMM-trigram-C) confirms that taking into account dependencies among the DAs through the conversation-level RNN improves accuracy. Furthermore, the better performance of our model compared to RCNN shows that summarizing utterances with an RNN augmented with an attention architecture is more effective than using a convolution architecture for DA sequence labeling.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 186, |
| "text": "(Kalchbrenner and Blunsom, 2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 320, |
| "end": 342, |
| "text": "(Stolcke et al., 2000)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 507, |
| "end": 529, |
| "text": "(Gamb\u00e4ck et al., 2011;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 530, |
| "end": 554, |
| "text": "Webb and Ferguson, 2010)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 360, |
| "end": 367, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "MapTask. Due to the unavailability of standard training/development/test sets for this dataset, we compare the results obtained by our model with those obtained by our implementation of the following independent DA classifiers: HMMtrigram-C (Stolcke et al., 2000) ; Random Forest -an instance-based random forest classifier; and Random Forest + prev DA -a random forest classifier that uses the previous DA tag.", |
| "cite_spans": [ |
| { |
| "start": 241, |
| "end": 263, |
| "text": "(Stolcke et al., 2000)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The results in Table 3 show that our model outperforms these baselines (statistically significant). These results reinforce the insights from the Switchboard corpus, whereby taking into account conversational dependencies between DAs substantially improves DA-labeling performance. 4 accuracies of 77.85% and 80.72%. However, these results are not directly comparable to Stolcke et al.'s (2000) or ours, and are therefore excluded from our comparison.", |
| "cite_spans": [ |
| { |
| "start": 371, |
| "end": 394, |
| "text": "Stolcke et al.'s (2000)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We investigate the influence of the main components of our model on performance by creating variants of our model through the addition or removal of connections or layers. We then compare the performance of these variants with that of the original model in terms of DA-classification accuracy and negative log-likelihood on the test, development and training partitions of our datasets. As done in Section 4, statistical significance is calculated for the test partitions of both datasets using McNemar's test with \u03b1 = 0.05.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectural analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To answer this question, we create a variant, denoted woUttRNN, where attentional coefficients are applied directly to the token embeddings. Thus, Equation 4 is changed to Equation 9:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z z z t = i \u03b1 i t x x x i t", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
| { |
| "text": "As seen in Tables 4 and 5, removing the utterance-level RNN (woUttRNN) reduces the accuracy and increases the negative log likelihood for the training, development and test partitions of both datasets. These changes are statistically significant for the test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
| { |
| "text": "Which sources of information are critical for computing the attentional component? In our main model, HA-RNN, we calculate the attentional signal using information from the previous DA, the previous hidden vector representation of the conversation-level RNN, and the embeddings of the tokens. To determine the contribution of the first two resources to the performance of the model, we create two variants of HA-RNN: woDA2Attn, which employs only the previous conversation-level RNN hidden vector; and woHid2Attn, which employs only the previous DA. Thus, in woDA2Attn, Equation 5 becomes Equation 10, and in woHid2Attn, Equation 5 becomes Equation 11:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s_t^i = U \u2022 tanh(W^{(in)} \u2022 x_t^i + W^{(co)} \u2022 g_{t-1} + b^{(in)}) (10) s_t^i = U \u2022 tanh(W^{(in)} \u2022 x_t^i + e_a(y_{t-1}) + b^{(in)})",
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
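The two ablated scoring functions can be sketched as follows. This is a hedged illustration, not the authors' implementation: all parameter names, shapes, and values here are invented stand-ins for the quantities in Equations 10 and 11.

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_hid, n_da = 4, 5, 3

# Hypothetical parameters mirroring Equations 10-11 (shapes invented).
U = rng.normal(size=d_hid)                # scoring vector
W_in = rng.normal(size=(d_hid, d_emb))    # token-embedding projection
W_co = rng.normal(size=(d_hid, d_hid))    # conversation-state projection
E_a = rng.normal(size=(n_da, d_hid))      # DA embedding table e_a
b_in = np.zeros(d_hid)

def score_woDA2Attn(x_i, g_prev):
    # Equation 10: only the previous conversation-level hidden vector
    # g_{t-1} feeds the attention score (the previous-DA term is dropped).
    return U @ np.tanh(W_in @ x_i + W_co @ g_prev + b_in)

def score_woHid2Attn(x_i, y_prev):
    # Equation 11: only the previous DA embedding e_a(y_{t-1}) feeds
    # the attention score (the conversation-state term is dropped).
    return U @ np.tanh(W_in @ x_i + E_a[y_prev] + b_in)

x_i = rng.normal(size=d_emb)       # one token embedding
g_prev = rng.normal(size=d_hid)    # previous conversation-level state
s10 = score_woDA2Attn(x_i, g_prev)
s11 = score_woHid2Attn(x_i, y_prev=1)
assert s10.shape == () and s11.shape == ()
```

In both variants the per-token scores s_t^i are subsequently normalized with a softmax to obtain the attentional coefficients.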
| { |
| "text": "scription of their MapTask subset is not sufficient to replicate their experiment, and Surendran and Levow's data split is not accessible. Notwithstanding the difference in conditions, our model's accuracy is superior to theirs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
| { |
| "text": "As seen in Tables 4 and 5, both of these resources provide valuable information, but the changes in performance due to the omission of these resources are smaller than those obtained with woUttRNN. Removing the DA connection (woDA2Attn) or the previous conversation-level RNN hidden vector (woHid2Attn) leads to statistically significant drops in accuracy and increases in negative log-likelihood on the test partitions of both datasets. The changes in performance with respect to the development and training sets vary across the datasets. As seen in Table 4, both models exhibit accuracy drops (and small increases in negative log-likelihood) on the Switchboard development set, but small accuracy increases (and negative log-likelihood drops) on the Switchboard training set, an indication of over-fitting. In contrast, as seen in Table 5, both models yield a negligible or no drop in accuracy on the MapTask development set, while both yield a drop in accuracy on the training set.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 552, |
| "end": 559, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 835, |
| "end": 842, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Does an RNN at the utterance level help?", |
| "sec_num": null |
| }, |
| { |
| "text": "To answer this question, we create a variant of our HA-RNN model, denoted woConvRNN, where the recurrent connections between the units in the conversation-level RNN are removed. The LSTM basis function is calculated with a fixed vector g_0 instead of the previous time step's vector. Thus, Equation 7 becomes Equation 12:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "g_t = f(g_0, z_t)",
| "eq_num": "(12)" |
| } |
| ], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
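The difference between Equation 7 (recurrent) and Equation 12 (non-recurrent) can be illustrated with a toy cell. This is a hedged sketch under invented assumptions: a simple tanh cell stands in for the LSTM basis function f, and all weights and inputs are arbitrary.

```python
import numpy as np

def f(g_prev, z_t, W, V):
    # Stand-in for the LSTM basis function (a plain tanh cell for
    # illustration; the paper's model uses an LSTM).
    return np.tanh(W @ g_prev + V @ z_t)

rng = np.random.default_rng(1)
d = 4
W, V = rng.normal(size=(d, d)), rng.normal(size=(d, d))
g0 = np.zeros(d)                            # fixed initial state
zs = [rng.normal(size=d) for _ in range(3)]  # utterance vectors z_t

# HA-RNN (Equation 7): the conversation-level state is carried forward.
g, recurrent = g0, []
for z in zs:
    g = f(g, z, W, V)
    recurrent.append(g)

# woConvRNN (Equation 12): every step restarts from the fixed vector g_0.
non_recurrent = [f(g0, z, W, V) for z in zs]

assert np.allclose(recurrent[0], non_recurrent[0])      # first step identical
assert not np.allclose(recurrent[1], non_recurrent[1])  # histories diverge
```

From the second utterance onward the two variants diverge, since woConvRNN discards all conversational history, which is consistent with its lower accuracy in Tables 4 and 5.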
| { |
| "text": "As seen in Tables 4 and 5, HA-RNN outperforms woConvRNN on the training/development/test partitions of both datasets. The difference between the performance of HA-RNN and woConvRNN is statistically significant for the test set.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 25, |
| "text": "Tables 4 and 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
| { |
| "text": "How effective are the DA connections? We have seen that the DA connections improve our model's performance when they are used to calculate the attentional signal. However, intuitively, the previous DA can also directly provide information about the current DA. For example, it is often the case that a Yes-No-Question is followed by Reply y or Reply n. To reflect this observation, we create another model, denoted wDA2DA, that has an additional direct connection between the previous DA and the current DA. That is, Equation 8 becomes Equation 13:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
| { |
| "text": "y_t | y_{<t}, o_{\u2264t} \u223c softmax(W^{(out)} \u2022 g_t + e_o(y_{t-1}) + b^{(out)}) (13)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
| { |
| "text": "As seen in Tables 4 and 5, wDA2DA performs much worse than HA-RNN. We posit that this happens due to the exposure bias problem (Ranzato et al., 2015). That is, during training, the model has access to the correct DA of the previous utterance. However, during testing, the decoding process has access only to predicted DAs, which may lead to the propagation of errors. To quantify the effect of this problem on our model, we designed another experiment where the variants of our model can access the correct DA even during testing; the results for the test partitions of both datasets appear in Table 6.",
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 180, |
| "text": "(Ranzato et al., 2015)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 626, |
| "end": 633, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
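The oracle vs. greedy comparison used to probe exposure bias can be sketched as follows. This is purely illustrative, not the authors' model: the "classifier" is a toy lookup rule whose output depends on the previous DA label (as in Equation 13), with invented utterances and labels.

```python
def predict(prev_da, utterance):
    # Hypothetical classifier: follows the utterance cue, but its
    # output also depends on the previous DA label.
    if utterance == "yes" and prev_da == "Yes-No-Question":
        return "Reply_y"
    if utterance == "yes":
        return "Acknowledge"
    return "Statement"

gold = ["Yes-No-Question", "Reply_y"]
utterances = ["do you see it", "yes"]

# Oracle condition: the previous gold DA is always available.
oracle = [predict(gold[i - 1] if i else "<s>", u)
          for i, u in enumerate(utterances)]

# Greedy condition: the model consumes its own (possibly wrong) predictions.
greedy, prev = [], "<s>"
for u in utterances:
    prev = predict(prev, u)
    greedy.append(prev)

assert oracle == ["Statement", "Reply_y"]
# The first-step error ("Statement" instead of "Yes-No-Question")
# propagates in the greedy condition: "yes" is now mislabeled.
assert greedy == ["Statement", "Acknowledge"]
```

This mirrors why a direct DA-to-DA connection (wDA2DA) amplifies exposure bias: correct previous labels help, but a single wrong prediction is fed forward and corrupts subsequent decisions.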
| { |
| "text": "The results in Table 6 show that exposure bias has different effects on the different variants of our model. As expected, woDA2Attn, which does not consider the previous DA, exhibits no change in performance between the oracle and greedy conditions. The models that employ a DA connection to compute the attention signal (HA-RNN, woUttRNN, woHid2Attn, woConvRNN) show a slight improvement in accuracy when using the correct DA as input instead of the predicted DA. In contrast, wDA2DA shows large improvements when using the correct DA (3.5% on Switchboard and 6.8% on MapTask), becoming the best-performing model for both datasets. This improvement may be attributed to the direct connection between the DAs in this model, which increases the influence of previous DAs on the prediction of the current DA: previous DA predictions that are largely correct will substantially improve the performance of wDA2DA, while noisy DA predictions will have the opposite effect.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 6", |
| "ref_id": "TABREF8" |
| }
| ], |
| "eq_spans": [], |
| "section": "How important is the RNN at the conversation level?", |
| "sec_num": null |
| }, |
| { |
| "text": "We analyze how our model HA-RNN distributes attention over the tokens in an utterance in order to identify tokens in focus. Figure 3 shows how the attentional vector highlights the most important tokens in sample utterances in the context of the DA-classification task. For example, in \"yes I do\", the most important token that identifies the Reply y class is the token \"yes\", which receives most of the probability mass from the attention mechanism. Table 7 shows the most attended tokens for four classes of DA in MapTask. We compiled these lists by computing the average attention that a token received over all the utterances in a DA class (we excluded tokens that appear fewer than 5 times). As shown in Table 7, both important tokens from Figure 3, \"move\" and \"yes\", appear in the respective lists. Two labels, Acknowledge and Reply y, have very similar attended tokens. In fact, many utterances in Acknowledge and Reply y have the same text form. Thus, the distinction between the two classes is highly dependent upon the conversational context. Also, note that although Reply n is not one of the most common DAs in MapTask, our model can still learn the most important tokens for this DA.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 124, |
| "end": 132, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 451, |
| "end": 458, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 707, |
| "end": 714, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 759, |
| "end": 774, |
| "text": "Figure 3 appear", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Attentional Analysis", |
| "sec_num": "5.2" |
| }, |
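The procedure used to compile the Table 7 lists (average attention per token within a DA class, excluding rare tokens) can be sketched as follows. This is a hedged reconstruction: the function name, the data layout, and the toy data are all invented for illustration.

```python
from collections import defaultdict

def most_attended_tokens(examples, min_count=5, top_k=3):
    """Sketch of the Table 7 procedure: average the attention a token
    receives over all utterances of a DA class, excluding tokens seen
    fewer than min_count times, then rank the tokens per class."""
    totals = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for da, tokens, attn in examples:     # attn: one weight per token
        for tok, a in zip(tokens, attn):
            totals[da][tok] += a
            counts[da][tok] += 1
    ranked = {}
    for da in totals:
        avg = {t: totals[da][t] / counts[da][t]
               for t in totals[da] if counts[da][t] >= min_count}
        ranked[da] = sorted(avg, key=avg.get, reverse=True)[:top_k]
    return ranked

# toy data: five Reply_y utterances so "yes" clears the count threshold
data = [("Reply_y", ["yes", "i", "do"], [0.8, 0.1, 0.1])] * 5
out = most_attended_tokens(data, min_count=5, top_k=1)
assert out["Reply_y"] == ["yes"]
```

Averaging (rather than summing) prevents frequent but weakly attended tokens from dominating a class's list, which matches the exclusion of tokens appearing fewer than 5 times.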
| { |
| "text": "In this paper, we proposed a novel hierarchical RNN for learning sequences of DAs. Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the conversation level and the utterance level. We further combine the model with an attention mechanism to focus on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets: Switchboard and MapTask. In the future, we plan to address the exposure bias problem, and incorporate acoustic features and speaker information into our model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The official manual stated that there were originally 220 tags. We follow the tag-clustering procedure by Christopher Potts described in compprag.christopherpotts.net/swda.html.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Two studies on MapTask DA classification were performed under experimental setups that differ from ours: Julia et al. (2010) employed HMM+SVM on text transcriptions and audio signals, obtaining an accuracy of 55.4% for transcriptions only. Surendran and Levow (2006) used Viterbi+SVM, posting a classification accuracy of 59.1% for transcriptions, the best result among systems that employ transcription data exclusively. Unfortunately, Julia et al.'s de-",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was supported in part by grant DP120100103 from the Australian Research Council.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Deep speech 2: End-to-end speech recognition in English and Mandarin", |
| "authors": [ |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Rishita", |
| "middle": [], |
| "last": "Anubhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Battenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Case", |
| "suffix": "" |
| }, |
| { |
| "first": "Jared", |
| "middle": [], |
| "last": "Casper", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Catanzaro", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingdong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Chrzanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Coates", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Diamos", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1512.02595" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Di- amos, et al. 2015. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The HCRC MapTask corpus", |
| "authors": [ |
| { |
| "first": "Anne", |
| "middle": [ |
| "H" |
| ], |
| "last": "Anderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Miles", |
| "middle": [], |
| "last": "Bader", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [ |
| "Gurman" |
| ], |
| "last": "Bard", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Boyle", |
| "suffix": "" |
| }, |
| { |
| "first": "Gwyneth", |
| "middle": [], |
| "last": "Doherty", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Garrod", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Isard", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacqueline", |
| "middle": [], |
| "last": "Kowtko", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Mcallister", |
| "suffix": "" |
| }, |
| { |
| "first": "Jim", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Language and speech", |
| "volume": "34", |
| "issue": "4", |
| "pages": "351--366", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anne H. Anderson, Miles Bader, Ellen Gurman Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, Steven Isard, Jacqueline Kowtko, Jan McAllister, Jim Miller, et al. 1991. The HCRC MapTask cor- pus. Language and speech, 34(4):351-366.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.0473" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The mathematics of statistical machine translation: Parameter estimation", |
| "authors": [ |
| {
| "first": "Peter",
| "middle": [
| "F"
| ],
| "last": "Brown",
| "suffix": ""
| },
| {
| "first": "Vincent",
| "middle": [
| "J"
| ],
| "last": "Della Pietra",
| "suffix": ""
| },
| {
| "first": "Stephen",
| "middle": [
| "A"
| ],
| "last": "Della Pietra",
| "suffix": ""
| },
| {
| "first": "Robert",
| "middle": [
| "L"
| ],
| "last": "Mercer",
| "suffix": ""
| }
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Adaptive subgradient methods for online learning and stochastic optimization", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Duchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Elad", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2121--2159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The application of hidden Markov models in speech recognition. Foundations and trends in signal processing", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Gales", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "195--304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Gales and Steve Young. 2008. The application of hidden Markov models in speech recognition. Foun- dations and trends in signal processing, 1(3):195- 304.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Active learning for dialogue act classification", |
| "authors": [ |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Gamb\u00e4ck", |
| "suffix": "" |
| }, |
| { |
| "first": "Fredrik", |
| "middle": [], |
| "last": "Olsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of Interspeech 2011", |
| "volume": "", |
| "issue": "", |
| "pages": "1329--1332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bj\u00f6rn Gamb\u00e4ck, Fredrik Olsson, and Oscar T\u00e4ckstr\u00f6m. 2011. Active learning for dialogue act classification. In Proceedings of Interspeech 2011, pages 1329- 1332, Florence, Italy.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Generating sequences with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1308.0850" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Dialog act classification using acoustic and discourse information of MapTask data", |
| "authors": [ |
| {
| "first": "Fatema",
| "middle": [
| "N"
| ],
| "last": "Julia",
| "suffix": ""
| },
| {
| "first": "Khan",
| "middle": [
| "M"
| ],
| "last": "Iftekharuddin",
| "suffix": ""
| },
| {
| "first": "Atiq",
| "middle": [
| "U"
| ],
| "last": "Islam",
| "suffix": ""
| }
| ], |
| "year": 2010, |
| "venue": "International Journal of Computational Intelligence and Applications", |
| "volume": "9", |
| "issue": "4", |
| "pages": "289--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fatema N. Julia, Khan M. Iftekharuddin, and Atiq U. Islam. 2010. Dialog act classification using acoustic and discourse information of MapTask data. Inter- national Journal of Computational Intelligence and Applications, 9(4):289-311.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Switchboard SWBD-DAMSL Shallow-Discourse-Function Annotation Coders Manual, Draft 13", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Debra", |
| "middle": [], |
| "last": "Biasca", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Jurafsky, Elizabeth Shriberg, and Debra Bi- asca. 1997. Switchboard SWBD-DAMSL Shallow- Discourse-Function Annotation Coders Manual, Draft 13. Technical report, Stanford University.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Recurrent convolutional neural networks for discourse compositionality", |
| "authors": [ |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1306.3584" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compo- sitionality. arXiv preprint arXiv:1306.3584.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Effective approaches to attentionbased neural machine translation", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "EMNLP'2015 -Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In EMNLP'2015 -Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Maximum entropy Markov models for information extraction and segmentation", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Dayne", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "ICML'00 -Proceedings of the 17th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "591--598", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew McCallum, Dayne Freitag, and Fernando C.N. Pereira. 2000. Maximum entropy Markov mod- els for information extraction and segmentation. In ICML'00 -Proceedings of the 17th International Conference on Machine Learning, pages 591-598, Stanford, California.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Sequence level training with recurrent neural networks", |
| "authors": [ |
| {
| "first": "Marc'Aurelio",
| "middle": [],
| "last": "Ranzato",
| "suffix": ""
| },
| {
| "first": "Sumit",
| "middle": [],
| "last": "Chopra",
| "suffix": ""
| },
| {
| "first": "Michael",
| "middle": [],
| "last": "Auli",
| "suffix": ""
| },
| {
| "first": "Wojciech",
| "middle": [],
| "last": "Zaremba",
| "suffix": ""
| }
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.06732" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection", |
| "authors": [ |
| {
| "first": "Sheng-Syun",
| "middle": [],
| "last": "Shen",
| "suffix": ""
| },
| {
| "first": "Hung-Yi",
| "middle": [],
| "last": "Lee",
| "suffix": ""
| }
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1604.00077" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sheng-syun Shen and Hung-yi Lee. 2016. Neural at- tention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. arXiv preprint arXiv:1604.00077.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Dropout: A simple way to prevent neural networks from overfitting", |
| "authors": [ |
| { |
| "first": "Nitish", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "15", |
| "issue": "1", |
| "pages": "1929--1958", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Coccaro", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Bates", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Carol", |
| "middle": [], |
| "last": "Van Ess-Dykema", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Ries", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Meteer", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Computational linguistics", |
| "volume": "26", |
| "issue": "3", |
| "pages": "339--373", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Eliza- beth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339-373.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Dialog act tagging with Support Vector Machines and hidden Markov models", |
| "authors": [ |
| { |
| "first": "Dinoj", |
| "middle": [], |
| "last": "Surendran", |
| "suffix": "" |
| }, |
| { |
| "first": "Gina-Anne", |
| "middle": [], |
| "last": "Levow", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "1950--1953", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dinoj Surendran and Gina-Anne Levow. 2006. Dialog act tagging with Support Vector Machines and hid- den Markov models. In Proceedings of Interspeech 2006, pages 1950-1953, Pittsburgh, Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Intonation and dialogue context as constraints for speech recognition", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [ |
| "A" |
| ], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "King", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [ |
| "D" |
| ], |
| "last": "Isard", |
| "suffix": "" |
| }, |
| { |
| "first": "Helen", |
| "middle": [ |
| "Wright" |
| ], |
| "last": "Hastie", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Language and Speech", |
| "volume": "41", |
| "issue": "3-4", |
| "pages": "493--512", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul A. Taylor, Simon King, Steve D. Isard, and He- len Wright Hastie. 1998. Intonation and dialogue context as constraints for speech recognition. Lan- guage and Speech, 41(3-4):493-512.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Enriching the knowledge sources used in a maximum entropy part-of-speech tagger", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 2000 Joint SIGDAT conference on Empirical Methods in Natural Language Processing and Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "63--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova and Christopher D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 63-70, Hong Kong.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "NAACL'2003 - Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "173--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "In NAACL'2003 - Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 173-180, Edmonton, Canada.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Automatic extraction of cue phrases for cross-corpus dialogue act classification", |
| "authors": [ |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "Webb", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ferguson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1310--1317", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nick Webb and Michael Ferguson. 2010. Automatic extraction of cue phrases for cross-corpus dialogue act classification. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1310-1317, Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Dialogue act classification based on intra-utterance features", |
| "authors": [ |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "Webb", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hepple", |
| "suffix": "" |
| }, |
| { |
| "first": "Yorick", |
| "middle": [], |
| "last": "Wilks", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the AAAI Workshop on Spoken Language Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nick Webb, Mark Hepple, and Yorick Wilks. 2005. Dialogue act classification based on intra-utterance features. In Proceedings of the AAAI Workshop on Spoken Language Understanding, Pittsburgh, Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Automatically predicting dialogue structure using prosodic features. Speech Communication", |
| "authors": [ |
| { |
| "first": "Helen", |
| "middle": [ |
| "Wright" |
| ], |
| "last": "Hastie", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [ |
| "D" |
| ], |
| "last": "Isard", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "36", |
| "issue": "", |
| "pages": "63--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Helen Wright Hastie, Massimo Poesio, and Steve D. Isard. 2002. Automatically predicting dialogue structure using prosodic features. Speech Communication, 36(1):63-79.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Switchboard data example.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Suppose we have a sequence of observations o := {o_1, o_2, ..., o_m} and the corresponding sequence of labels y := {y_1, y_2, ..., y_m}, where each observation o_t is a sequence. Our hierarchical-attentional model, denoted HA-RNN, learns the conditional probability P(y|o) relating the ob-", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "HA-RNN -Hierarchical-attentional RNN model.", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "text": "Seven most frequent DAs and examples for (a) Switchboard and (b) MapTask.", |
| "content": "<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>RCNN</td><td>73.9%</td></tr><tr><td>RNN-Attentional-C</td><td>72.6%</td></tr><tr><td>HMM-trigram-C</td><td>71.0%</td></tr><tr><td>HA-RNN</td><td>74.5%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "text": "Performance on Switchboard.", |
| "content": "<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>HMM-trigram-C</td><td>52.3%</td></tr><tr><td>Random Forest</td><td>52.5%</td></tr><tr><td>Random Forest + prev DA</td><td>55.3%</td></tr><tr><td>HA-RNN</td><td>63.3%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "text": "Performance on MapTask.", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "text": "Performance of variants of the HA-RNN model on Switchboard.", |
| "content": "<table><tr><td>Model</td><td colspan=\"3\">Accuracy</td><td colspan=\"3\">Neg log likelihood</td></tr><tr><td></td><td>Test</td><td>Dev</td><td>Train</td><td>Test</td><td>Dev</td><td>Train</td></tr><tr><td>HA-RNN</td><td>63.3%</td><td>61.9%</td><td>73.4%</td><td>3486</td><td>3228</td><td>18191</td></tr><tr><td>woUttRNN</td><td>56.9%</td><td>58.0%</td><td>62.2%</td><td>3823</td><td>3445</td><td>25074</td></tr><tr><td>woDA2Attn</td><td>61.4%</td><td>61.7%</td><td>70.1%</td><td>3539</td><td>3212</td><td>19780</td></tr><tr><td>woHid2Attn</td><td>62.2%</td><td>61.9%</td><td>71.8%</td><td>3487</td><td>3248</td><td>19132</td></tr><tr><td>woConvRNN</td><td>58.9%</td><td>60.0%</td><td>66.9%</td><td>3579</td><td>3248</td><td>20961</td></tr><tr><td>wDA2DA</td><td>58.2%</td><td>58.4%</td><td>69.3%</td><td>4014</td><td>3663</td><td>21135</td></tr></table>", |
| "html": null |
| }, |
| "TABREF6": { |
| "num": null, |
| "type_str": "table", |
| "text": "Performance of variants of the HA-RNN model on MapTask.", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF7": { |
| "num": null, |
| "type_str": "table", |
| "text": "in their respective DA columns. Two of the most common", |
| "content": "<table><tr><td></td><td colspan=\"2\">Switchboard</td><td colspan=\"2\">MapTask</td></tr><tr><td></td><td>Oracle</td><td>Greedy</td><td>Oracle</td><td>Greedy</td></tr><tr><td>HA-RNN</td><td>74.6%</td><td>74.5%</td><td>64.1%</td><td>63.3%</td></tr><tr><td>woUttRNN</td><td>73.2%</td><td>71.8%</td><td>56.9%</td><td>57.1%</td></tr><tr><td>woDA2Attn</td><td>73.7%</td><td>73.7%</td><td>61.4%</td><td>61.4%</td></tr><tr><td>woHid2Attn</td><td>73.8%</td><td>72.8%</td><td>62.4%</td><td>62.2%</td></tr><tr><td>woConvRNN</td><td>72.2%</td><td>71.8%</td><td>58.9%</td><td>58.9%</td></tr><tr><td>wDA2DA</td><td>75.0%</td><td>71.5%</td><td>65.0%</td><td>58.2%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF8": { |
| "num": null, |
| "type_str": "table", |
| "text": "Performance of oracle and greedy decoding on Switchboard and MapTask test data.Figure 3: Sample DAs with highlighted attention vectors for MapTask.", |
| "content": "<table><tr><td colspan=\"2\">Acknowledge Instruct</td><td colspan=\"2\">Reply y Reply n</td></tr><tr><td>mmhmm</td><td>move</td><td>mmhmm</td><td>nope</td></tr><tr><td>uh-huh</td><td>continue</td><td>uh-huh</td><td>i've</td></tr><tr><td>yes</td><td>drop</td><td>yes</td><td>no</td></tr><tr><td>yeah</td><td>starting</td><td>yep</td><td>it's</td></tr><tr><td>see</td><td>pass</td><td>aye</td><td>you</td></tr><tr><td>go</td><td>reach</td><td>i've</td><td>go</td></tr><tr><td>aye</td><td>stop</td><td>yeah</td><td>don't</td></tr><tr><td>no</td><td>coming</td><td>i'm</td><td>not</td></tr><tr><td>you</td><td>go</td><td>you</td><td>haven't</td></tr><tr><td>i'm</td><td>whatever</td><td>go</td><td>just</td></tr></table>", |
| "html": null |
| }, |
| "TABREF9": { |
| "num": null, |
| "type_str": "table", |
| "text": "Sample DA-specific high-focus tokens for MapTask.", |
| "content": "<table/>", |
| "html": null |
| } |
| } |
| } |
| } |