{
"paper_id": "N16-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:36:13.955737Z"
},
"title": "Multi-domain Neural Network Language Generation for Spoken Dialogue Systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Rojas-Barahona",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.",
"pdf_parse": {
"paper_id": "N16-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modern Spoken Dialogue Systems (SDS) are typically developed according to a well-defined ontology, which provides a structured representation of the domain data that the dialogue system can talk about, such as searching for a restaurant or shopping for a laptop. Unlike conventional approaches employing a substantial amount of handcrafting for each individual processing component (Ward and Issar, 1994; Bohus and Rudnicky, 2009) , statistical approaches to SDS promise a domain-scalable framework which requires a minimal amount of human intervention (Young et al., 2013) . showed improved performance in belief tracking by training a general model and adapting it to specific domains. Similar benefit can be observed in , in which a Bayesian committee machine (Tresp, 2000) was used to model policy learning in a multi-domain SDS regime.",
"cite_spans": [
{
"start": 382,
"end": 404,
"text": "(Ward and Issar, 1994;",
"ref_id": "BIBREF41"
},
{
"start": 405,
"end": 430,
"text": "Bohus and Rudnicky, 2009)",
"ref_id": "BIBREF6"
},
{
"start": 553,
"end": 573,
"text": "(Young et al., 2013)",
"ref_id": "BIBREF47"
},
{
"start": 763,
"end": 776,
"text": "(Tresp, 2000)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In past decades, adaptive NLG has been studied from linguistic perspectives, such as systems that learn to tailor user preferences (Walker et al., 2007) , convey a specific personality trait (Mairesse and Walker, 2008; Mairesse and Walker, 2011) , or align with their conversational partner (Isard et al., 2006) . Domain adaptation was first addressed by Hogan et al. (2008) using a generator based on the Lexical Functional Grammar (LFG) f-structures (Kaplan and Bresnan, 1982) . Although these approaches can model rich linguistic phenomena, they are not readily adaptable to data since they still require many handcrafted rules to define the search space. Recently, RNN-based language generation has been introduced (Wen et al., 2015a; Wen et al., 2015b) . This class of statistical generators can learn generation decisions directly from dialogue act (DA)-utterance pairs without any semantic annotations (Mairesse and Young, 2014) or hand-coded grammars (Langkilde and Knight, 1998; Walker et al., 2002) . Many existing adaptation approaches (Wen et al., 2013; Shi et al., 2015; Chen et al., 2015) can be directly applied due to the flexibility of the underlying RNN language model (RNNLM) architecture (Mikolov et al., 2010) .",
"cite_spans": [
{
"start": 131,
"end": 152,
"text": "(Walker et al., 2007)",
"ref_id": "BIBREF40"
},
{
"start": 191,
"end": 218,
"text": "(Mairesse and Walker, 2008;",
"ref_id": "BIBREF25"
},
{
"start": 219,
"end": 245,
"text": "Mairesse and Walker, 2011)",
"ref_id": "BIBREF26"
},
{
"start": 291,
"end": 311,
"text": "(Isard et al., 2006)",
"ref_id": "BIBREF18"
},
{
"start": 355,
"end": 374,
"text": "Hogan et al. (2008)",
"ref_id": "BIBREF17"
},
{
"start": 451,
"end": 477,
"text": "(Kaplan and Bresnan, 1982)",
"ref_id": "BIBREF19"
},
{
"start": 727,
"end": 738,
"text": "al., 2015a;",
"ref_id": "BIBREF44"
},
{
"start": 739,
"end": 757,
"text": "Wen et al., 2015b)",
"ref_id": "BIBREF45"
},
{
"start": 909,
"end": 935,
"text": "(Mairesse and Young, 2014)",
"ref_id": "BIBREF27"
},
{
"start": 959,
"end": 987,
"text": "(Langkilde and Knight, 1998;",
"ref_id": "BIBREF22"
},
{
"start": 988,
"end": 1008,
"text": "Walker et al., 2002)",
"ref_id": "BIBREF39"
},
{
"start": 1047,
"end": 1065,
"text": "(Wen et al., 2013;",
"ref_id": "BIBREF43"
},
{
"start": 1066,
"end": 1083,
"text": "Shi et al., 2015;",
"ref_id": "BIBREF34"
},
{
"start": 1084,
"end": 1102,
"text": "Chen et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 1208,
"end": 1230,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discriminative training (DT) has been successfully used to train RNNs for various tasks. By optimising directly against the desired objective function such as BLEU score or Word Error Rate (Kuo et al., 2002) , the model can explore its output space and learn to discriminate between good and bad hypotheses. In this paper we show that DT can enable a generator to learn more efficiently when in-domain data is scarce.",
"cite_spans": [
{
"start": 189,
"end": 207,
"text": "(Kuo et al., 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper presents an incremental recipe for training multi-domain language generators based on a purely data-driven, RNN-based generation model. Following a review of related work in section 2, section 3 describes the detailed RNN generator architecture. The data counterfeiting approach for synthesising an in-domain dataset is introduced in section 4, where it is compared to the simple model fine-tuning approach. In section 5, we describe our proposed DT procedure for training natural language generators. Following a brief review of the data sets used in section 6, corpus-based evaluation results are presented in section 7. In order to assess the subjective performance of our system, a quality test and a pairwise preference test are presented in section 8. The results show that the proposed adaptation recipe improves not only the objective scores but also the user's perceived quality of the system. We conclude with a brief summary in section 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain adaptation problems arise when we have a sufficient amount of labeled data in one domain (the source domain), but have little or no labeled data in another related domain (the target domain). Domain adaptability for real-world speech and language applications is especially important because both language usage and the topics of interest are constantly evolving. Historically, domain adaptation has been less well studied in the NLG community. The most relevant work was done by Hogan et al. (2008) . They showed that an LFG f-structure based generator could yield better performance when trained on in-domain sentences paired with pseudo parse tree inputs generated from a state-of-the-art, but out-of-domain parser. The SPoT-based generator proposed by Walker et al. (2002) has the potential to address domain adaptation problems. However, their published work has focused on tailoring user preferences (Walker et al., 2007) and mimicking personality traits (Mairesse and Walker, 2011) . Lemon (2008) proposed a Reinforcement Learning (RL) framework in which policy and NLG components can be jointly optimised and adapted based on online user feedback. In contrast, Mairesse et al. (2010) proposed using active learning to mitigate the data sparsity problem when training data-driven NLG systems. Furthermore, Cuayhuitl et al. (2014) trained statistical surface realisers from unlabelled data by an automatic slot labelling technique.",
"cite_spans": [
{
"start": 487,
"end": 506,
"text": "Hogan et al. (2008)",
"ref_id": "BIBREF17"
},
{
"start": 762,
"end": 782,
"text": "Walker et al. (2002)",
"ref_id": "BIBREF39"
},
{
"start": 912,
"end": 933,
"text": "(Walker et al., 2007)",
"ref_id": "BIBREF40"
},
{
"start": 967,
"end": 994,
"text": "(Mairesse and Walker, 2011)",
"ref_id": "BIBREF26"
},
{
"start": 997,
"end": 1009,
"text": "Lemon (2008)",
"ref_id": "BIBREF24"
},
{
"start": 1175,
"end": 1197,
"text": "Mairesse et al. (2010)",
"ref_id": "BIBREF28"
},
{
"start": 1322,
"end": 1345,
"text": "Cuayhuitl et al. (2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In general, feature-based adaptation is perhaps the most widely used technique (Blitzer et al., 2007; Pan and Yang, 2010; Duan et al., 2012) . By exploiting correlations and similarities between data points, it has been successfully applied to problems like speaker adaptation (Gauvain and Lee, 1994; Leggetter and Woodland, 1995) and various tasks in natural language processing (Daum\u00e9 III, 2009) . In contrast, model-based adaptation is particularly useful for language modeling (LM) (Bellegarda, 2004) . Mixture-based topic LMs (Gildea and Hofmann, 1999) are widely used in N-gram LMs for domain adaptation. Similar ideas have been applied to applications that require adapting LMs, such as machine translation (MT) (Koehn and Schroeder, 2007) and personalised speech recognition (Wen et al., 2012) .",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Blitzer et al., 2007;",
"ref_id": "BIBREF5"
},
{
"start": 102,
"end": 121,
"text": "Pan and Yang, 2010;",
"ref_id": "BIBREF32"
},
{
"start": 122,
"end": 140,
"text": "Duan et al., 2012)",
"ref_id": "BIBREF11"
},
{
"start": 277,
"end": 300,
"text": "(Gauvain and Lee, 1994;",
"ref_id": "BIBREF13"
},
{
"start": 301,
"end": 330,
"text": "Leggetter and Woodland, 1995)",
"ref_id": "BIBREF23"
},
{
"start": 380,
"end": 397,
"text": "(Daum\u00e9 III, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 486,
"end": 504,
"text": "(Bellegarda, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 531,
"end": 557,
"text": "(Gildea and Hofmann, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 719,
"end": 746,
"text": "(Koehn and Schroeder, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 783,
"end": 801,
"text": "(Wen et al., 2012)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Domain adaptation for Neural Network (NN)-based LMs has also been studied in the past. A feature-augmented RNNLM was first proposed by Mikolov and Zweig (2012) , but later applied to multi-genre broadcast speech recognition (Chen et al., 2015) and personalised language modeling (Wen et al., 2013) . These methods are based on fine-tuning existing network parameters on adaptation data. However, careful regularisation is often necessary. In a slightly different area, Shi et al. (2015) applied curriculum learning to RNNLM adaptation.",
"cite_spans": [
{
"start": 134,
"end": 158,
"text": "Mikolov and Zweig (2012)",
"ref_id": "BIBREF29"
},
{
"start": 223,
"end": 242,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 278,
"end": 296,
"text": "(Wen et al., 2013)",
"ref_id": "BIBREF43"
},
{
"start": 468,
"end": 485,
"text": "Shi et al. (2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Discriminative training (DT) (Collins, 2002) is an alternative to the maximum likelihood (ML) criterion. For classification, DT can be split into two phases: (1) decoding training examples using the current model and scoring them, and (2) adjusting the model parameters to maximise the separation between the correct target annotation and the competing incorrect annotations. It has been successfully applied to many research problems, such as speech recognition (Kuo et al., 2002; Voigtlaender et al., 2015) and MT (He and Deng, 2012; . Recently, trained an RNNLM with a DT objective and showed improved performance on an MT task. However, their RNN probabilities only served as input features to a phrase-based MT system.",
"cite_spans": [
{
"start": 29,
"end": 44,
"text": "(Collins, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 463,
"end": 481,
"text": "(Kuo et al., 2002;",
"ref_id": "BIBREF21"
},
{
"start": 482,
"end": 508,
"text": "Voigtlaender et al., 2015)",
"ref_id": "BIBREF38"
},
{
"start": 516,
"end": 535,
"text": "(He and Deng, 2012;",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The neural language generation model (Wen et al., 2015a; Wen et al., 2015b) is an RNNLM (Mikolov et al., 2010) augmented with semantic input features such as a dialogue act 1 (DA) denoting the required semantics of the generated output. At every time step t, the model consumes the 1-hot representations of both the DA d_t and a token w_t 2 to update its internal state h_t. Based on this new state, the output distribution over the next output token is calculated. The model can thus generate a sequence of tokens by repeatedly sampling the current output distribution to obtain the next input token until an end-of-sentence sign is generated. Finally, the generated sequence is lexicalised 3 to form the target utterance.",
"cite_spans": [
{
"start": 87,
"end": 109,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "The Semantically Conditioned Long Short-term Memory Network (SC-LSTM) (Wen et al., 2015b) is a specialised extension of the LSTM network (Hochreiter and Schmidhuber, 1997) for language generation, which has previously been shown capable of learning generation decisions from paired DA-utterance examples end-to-end without a modular pipeline (Walker et al., 2002; Stent et al., 2004) . Like the LSTM, the SC-LSTM relies on a vector of memory cells c_t \u2208 R^n and a set of element-wise multiplication gates to control how information is stored, forgotten, and exploited inside the network. The SC-LSTM architecture used in this paper is defined by 1 A combination of an action type and a set of slot-value pairs, e.g. inform(name=\"Seven days\", food=\"chinese\")",
"cite_spans": [
{
"start": 137,
"end": 171,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 333,
"end": 354,
"text": "(Walker et al., 2002;",
"ref_id": "BIBREF39"
},
{
"start": 355,
"end": 374,
"text": "Stent et al., 2004)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "2 We use token instead of word because our model operates on text for which slot values are replaced by their corresponding slot tokens. We call this procedure delexicalisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "3 The process of replacing a slot token by its value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "the following equations,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "\\begin{pmatrix} i_t \\\\ f_t \\\\ o_t \\\\ r_t \\\\ \\hat{c}_t \\end{pmatrix} = \\begin{pmatrix} \\text{sigmoid} \\\\ \\text{sigmoid} \\\\ \\text{sigmoid} \\\\ \\text{sigmoid} \\\\ \\tanh \\end{pmatrix} W_{5n,2n} \\begin{pmatrix} w_t \\\\ h_{t-1} \\end{pmatrix}, \\qquad d_t = r_t \\odot d_{t-1}, \\qquad c_t = f_t \\odot c_{t-1} + i_t \\odot \\hat{c}_t + \\tanh(W_{dc} d_t), \\qquad h_t = o_t \\odot \\tanh(c_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "where n is the hidden layer size; i_t, f_t, o_t, r_t \u2208 [0, 1]^n are the input, forget, output, and reading gates, respectively; \u0109_t and c_t are the proposed cell value and the true cell value at time t; and W_{5n,2n} and W_{dc} are the model parameters to be learned. The major difference between the SC-LSTM and the vanilla LSTM is the introduction of the reading gates, which control the semantic input features presented to the network. It was shown in Wen et al. (2015b) that these reading gates act like keyword and key-phrase detectors that learn the alignments between individual semantic input features and their corresponding realisations without additional supervision. After the hidden layer state is obtained, computing the next-word distribution and sampling from it is straightforward,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "p(w_{t+1} \\mid w_t, w_{t-1}, \\ldots, w_0, d_t) = \\mathrm{softmax}(W_{ho} h_t), \\qquad w_{t+1} \\sim p(w_{t+1} \\mid w_t, w_{t-1}, \\ldots, w_0, d_t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "where W_{ho} is another weight matrix to learn. The entire network is trained end-to-end using a cross-entropy cost function between the predicted word distribution p_t and the actual word distribution y_t, with regularisation on the DA transition dynamics,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "F(\\theta) = \\sum_t p_t^{\\top} \\log(y_t) + \\|d_T\\| + \\sum_{t=0}^{T-1} \\eta \\, \\xi^{\\|d_{t+1} - d_t\\|} \\quad (1), \\text{ where } \\theta = \\{W_{5n,2n}, W_{dc}, W_{ho}\\}, d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "T is the DA vector at the last index T, and \u03b7 and \u03be are constants set to 10^{-4} and 100, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Language Generator",
"sec_num": "3"
},
{
"text": "Given training instances (represented by DA and sentence tuples {d_i, \u2126_i}) from the source domain S (rich) and the target domain T (limited), the goal is to find a set of SC-LSTM parameters \u03b8_T that can perform acceptably well in the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Multi-domain Models",
"sec_num": "4"
},
{
"text": "A straightforward way to adapt NN-based models to a target domain is to continue training or fine-tuning a well-trained generator on whatever new target domain data is available. This training procedure is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Fine-Tuning",
"sec_num": "4.1"
},
{
"text": "1. Train a source domain generator \u03b8_S on source domain data {d_i, \u2126_i} \u2208 S with all values delexicalised 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Fine-Tuning",
"sec_num": "4.1"
},
{
"text": "2. Divide the adaptation data into training and validation sets. Refine the parameters by training on adaptation data {d_i, \u2126_i} \u2208 T with early stopping and a smaller starting learning rate. This yields the target domain generator \u03b8_T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Fine-Tuning",
"sec_num": "4.1"
},
{
"text": "Although this method can benefit from parameter sharing in the LM part of the network, the parameters of similar input slot-value pairs are not shared 4 . In other words, the realisation of any unseen slot-value pair in the target domain can only be learned from scratch. Adaptation offers no benefit in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Fine-Tuning",
"sec_num": "4.1"
},
{
"text": "In order to maximise the effect of domain adaptation, the model should be able to (1) generate acceptable realisations for unseen slot-value pairs based on similar slot-value pairs seen in the training data, and (2) continue to distinguish slot-value pairs that are similar but nevertheless distinct. Instead of exploring weight tying strategies in different training stages (which is complex to implement and typically relies on ad hoc tying rules), we propose a data counterfeiting approach to synthesise target domain data from source domain data. The procedure is shown in Figure 1 and described as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 585,
"end": 593,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Counterfeiting",
"sec_num": "4.2"
},
{
"text": "1. Categorise slots in both the source and target domains into classes, according to some similarity measure. In our case, we categorise them based on their functional type to yield three classes: informable, requestable, and binary 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Counterfeiting",
"sec_num": "4.2"
},
{
"text": "2. Delexicalise all slots and values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Counterfeiting",
"sec_num": "4.2"
},
{
"text": "3. For each slot s in a source instance of rare slot-value pairs in the target domain. Furthermore, the approach also preserves the co-occurrence statistics of slot-value pairs and their realisations. This allows the model to learn the gating mechanism even before adaptation data is introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Counterfeiting",
"sec_num": "4.2"
},
{
"text": "(d_i, \u2126_i) \u2208 S,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Counterfeiting",
"sec_num": "4.2"
},
{
"text": "In contrast to the traditional ML criterion (Equation 1), whose goal is to maximise the log-likelihood of correct examples, DT aims at separating correct examples from competing incorrect examples. Given a training instance (d_i, \u2126_i), the training process starts by generating a set of candidate sentences Gen(d_i) using the current model parameters \u03b8 and DA d_i. The discriminative cost function can therefore be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "F (\u03b8) = \u2212E[L(\u03b8)] = \u2212 \u2126\u2208Gen(d i ) p \u03b8 (\u2126|d i )L(\u2126, \u2126 i ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "where L(\u2126, \u2126_i) is the scoring function that evaluates candidate \u2126 against the ground truth reference \u2126_i, and p_\u03b8(\u2126 | d_i) is the normalised probability of the candidate, calculated by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "p_{\\theta}(\\Omega \\mid d_i) = \\frac{\\exp[\\gamma \\log p(\\Omega \\mid d_i, \\theta)]}{\\sum_{\\Omega' \\in \\mathrm{Gen}(d_i)} \\exp[\\gamma \\log p(\\Omega' \\mid d_i, \\theta)]} \\quad (3), \\qquad \\gamma \\in [0, \\infty)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "is a tuned scaling factor that flattens the distribution for \u03b3 < 1 and sharpens it for \u03b3 > 1. The unnormalised candidate likelihood log p(\u2126 | d_i, \u03b8) is produced by summing token likelihoods from the RNN generator output,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\log p(\\Omega \\mid d_i, \\theta) = \\sum_{w_t \\in \\Omega} \\log p(w_t \\mid d_i, \\theta)",
"eq_num": "(4)"
}
],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "The scoring function L(\u2126, \u2126 i ) can be further generalised to take several scoring functions into account",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\\Omega, \\Omega_i) = \\sum_j \\beta_j L_j(\\Omega, \\Omega_i)",
"eq_num": "(5)"
}
],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "where \u03b2_j is the weight of the j-th scoring function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "Since the cost function presented in Equation (2) is differentiable everywhere, back-propagation can be applied to calculate the gradients and update the parameters directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "5"
},
{
"text": "In order to test our proposed recipe for training multi-domain language generators, we conducted experiments using four different domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. Datasets for the restaurant and hotel domains have been previously released by Wen et al. (2015b). These were created by workers recruited by Amazon Mechanical Turk (AMT) by asking them to propose an appropriate natural language realisation corresponding to each system dialogue act actually generated by a dialogue system. However, the number of actually occurring DA combinations in the restaurant and hotel domains were rather limited (\u223c200) and since multiple references were collected for each DA, the resulting datasets are not sufficiently diverse to enable the assessment of the generalisation capability of the different training methods over unseen semantic inputs. In order to create more diverse datasets for the laptop and TV domains, we enumerated all possible combinations of dialogue act types and slots based on the ontology shown in Table 1 . This yielded about 13K distinct DAs in the laptop domain and 7K distinct DAs in the TV domain. We then used AMT workers to collect just one realisation for each DA. Since the resulting datasets have a much larger input space but only one training example for each DA, the system must learn partial realisations of concepts and be able to recombine and apply them to unseen DAs. Also note that the number of act types and slots of the new ontology is larger, which makes NLG in both laptop and TV domains much harder.",
"cite_spans": [],
"ref_spans": [
{
"start": 1071,
"end": 1078,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "6"
},
{
"text": "We first assess generator performance using two objective evaluation metrics, the BLEU-4 score (Papineni et al., 2002) and the slot error rate ERR (Wen et al., 2015b). Slot error rates were calculated by averaging slot errors over each of the top 5 realisations in the entire corpus. We used multiple references to compute the BLEU scores when available (i.e. for the restaurant and hotel domains). [Figure 3 caption: Same as Figure 2 , but the results were evaluated by adapting from the SF restaurant and hotel joint dataset to the laptop and TV joint dataset. 10% \u2248 2K examples.] In order to better",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 414,
"end": 422,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Corpus-based Evaluation",
"sec_num": "7"
},
{
"text": "compare results across different methods, we plotted the BLEU and slot error rate curves against different amounts of adaptation data. Note that in the graphs the x-axis is presented on a log-scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus-based Evaluation",
"sec_num": "7"
},
{
"text": "The generators were implemented using the Theano library (Bergstra et al., 2010; Bastien et al., 2012) , and trained by partitioning each of the collected corpora into a training, validation, and testing set in the ratio 3:1:1. All the generators were trained by treating each sentence as a mini-batch. An l_2 regularisation term was added to the objective function for every 10 training examples. The hidden layer size was set to 100 for all cases. Stochastic gradient descent and back propagation through time (Werbos, 1990) were used to optimise the parameters. In order to prevent overfitting, early stopping was implemented using the validation set.",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "(Bergstra et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 81,
"end": 102,
"text": "Bastien et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "During decoding, we over-generated 20 utterances and selected the top 5 realisations for each DA according to the following reranking criterion,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "R = -(F(\\theta) + \\lambda \\, \\mathrm{ERR}) \\quad (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "where \u03bb is a trade-off constant, F(\u03b8) is the cost generated by the network parameters \u03b8, and the slot error rate ERR is computed by exact matching of the slot tokens in the candidate utterances. \u03bb is set to a large value (10) in order to severely penalise nonsensical outputs. Since our generator works stochastically and the trained networks can differ depending on the initialisation, all the results shown below were averaged over 5 randomly initialised networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "We first compared the data counterfeiting (counterfeit) approach with the model fine-tuning (tune) method and models trained from scratch (scratch). Figure 2 shows the result of adapting models between similar domains, from laptop to TV. Because of the parameter sharing in the LM part of the network, model fine-tuning (tune) achieves a better BLEU score than training from scratch (scratch) when target domain data is limited. However, if we apply the data counterfeiting (counterfeit) method, we obtain an even greater BLEU score gain. This is mainly due to the better realisation of unseen slot-value pairs. On the other hand, data counterfeiting (counterfeit) also brings a substantial reduction in slot error rate. This is because it preserves the co-occurrence statistics between slot-value pairs and realisations, which allows the model to learn good semantic alignments even before adaptation data is introduced. Similar results can be seen in Figure 3 , in which adaptation was performed on more disjoint domains: restaurant and hotel joint domain to laptop and TV joint domain. The data counterfeiting (counterfeit) method is still superior to the other methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 952,
"end": 960,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Data Counterfeiting",
"sec_num": "7.2"
},
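Data counterfeiting can be sketched as follows: each delexicalised slot token in a source-domain utterance is swapped for a randomly chosen target-domain slot of the same functional class (e.g. the informable and binary classes mentioned in the footnotes), so the realisation templates and slot co-occurrence statistics carry over. The slot inventory below is a toy assumption, not the paper's actual ontology:

```python
import random

# Toy slot inventories grouped by functional class (illustrative only).
LAPTOP = {"informable": ["SLOT_BATTERYRATING", "SLOT_WEIGHTRANGE"],
          "binary": ["SLOT_ISFORBUSINESSCOMPUTING"]}
TV = {"informable": ["SLOT_SCREENSIZERANGE", "SLOT_ECORATING"],
      "binary": ["SLOT_HASUSBPORT"]}

def slot_class(slot, domain):
    """Return the functional class a slot belongs to in a domain."""
    return next(c for c, slots in domain.items() if slot in slots)

def counterfeit(tokens, src, tgt, rng=random):
    """Swap each source-domain slot token for a target-domain slot of the same class."""
    out = []
    for t in tokens:
        if t.startswith("SLOT_"):
            t = rng.choice(tgt[slot_class(t, src)])
        out.append(t)
    return out

src_utt = "this laptop has a SLOT_BATTERYRATING battery rating".split()
print(counterfeit(src_utt, LAPTOP, TV, random.Random(1)))
```

Because only the slot tokens change, the surrounding realisation and the slot positions are preserved, which is what lets the adapted model learn good semantic alignments before any real in-domain data is seen.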
{
"text": "The generator parameters obtained from data counterfeiting and ML adaptation were further tuned by applying DT. In each case, the models were optimised using two objective functions: BLEU-4 score and slot error rate. However, we used a soft version of BLEU called sentence BLEU as described in , to mitigate the sparse n-gram match problem of BLEU at the sentence level. In our experiments, we set \u03b3 to 5.0 and \u03b2_j to 1.0 and -1.0 for BLEU and ERR, respectively. For each DA, we applied our generator 50 times to generate candidate sentences. Repeated candidates were removed. We treated the remaining candidates as a single batch and updated the model parameters by the procedure described in section 5. We evaluated performance of the algorithm on the laptop to TV adaptation scenario, and compared models with and without discriminative training (ML+DT & ML). The results are shown in Figure 4 where it can be seen that DT consistently improves generator performance on both metrics. Another interesting point to note is that slot error rate is easier to optimise compared to BLEU (ERR \u2192 0 after DT). This is probably because the sentence BLEU optimisation criterion is only an approximation of the corpus BLEU score used for evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 888,
"end": 896,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Discriminative Training",
"sec_num": "7.3"
},
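The DT objective over a batch of candidates can be sketched as an expected score under renormalised candidate probabilities scaled by \u03b3, combining sentence BLEU and ERR with the weights \u03b2_j above. The add-1 smoothed sentence BLEU here is an assumption (the exact smoothing in the cited work may differ), and all names are illustrative:

```python
import math
from collections import Counter

def sentence_bleu(hyp, ref, max_n=4):
    """Smoothed sentence-level BLEU-4 (add-1 on clipped n-gram counts),
    softening the sparse n-gram match problem at the sentence level."""
    log_p = 0.0
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        match = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        total = max(sum(h.values()), 1)
        log_p += math.log((match + 1) / (total + 1)) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return bp * math.exp(log_p)

def expected_objective(candidates, gamma=5.0, beta_bleu=1.0, beta_err=-1.0):
    """candidates: list of (log_prob, hyp_tokens, ref_tokens, err).
    Expected weighted score under probabilities sharpened by gamma."""
    ws = [math.exp(gamma * lp) for lp, _, _, _ in candidates]
    z = sum(ws)
    return sum(w / z * (beta_bleu * sentence_bleu(h, r) + beta_err * e)
               for w, (_, h, r, e) in zip(ws, candidates))
```

With \u03b2 = 1.0 for BLEU and -1.0 for ERR, gradient ascent on this expectation pushes probability mass toward candidates that score well on both metrics, mirroring the batch update described in section 5.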
{
"text": "Since automatic metrics may not consistently agree with human perception (Stent et al., 2005) Table 2: Human evaluation for utterance quality in two domains. Results are shown for two metrics (rating out of 3). Statistical significance was computed using a two-tailed Student's t-test between the model trained with the full dataset (scrALL) and all others.",
"cite_spans": [
{
"start": 73,
"end": 93,
"text": "(Stent et al., 2005)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "8"
},
{
"text": "To verify this, a set of judges was recruited using AMT. We tested our models on two adaptation scenarios: laptop to TV and TV to laptop. For each task, pairwise comparisons were made among four systems: training from scratch using the full dataset (scrALL), adapting with DT training but only 10% of target domain data (DT-10%), adapting with ML training but only 10% of target domain data (ML-10%), and training from scratch using only 10% of target domain data (scr-10%). In order to evaluate system performance in the presence of language variation, each system generated 5 different surface realisations for each input DA, and the human judges were asked to score each of them in terms of informativeness and naturalness (rating out of 3) and to state a preference between the two. Here informativeness is defined as whether the utterance contains all the information specified in the DA, and naturalness is defined as whether the utterance could plausibly have been produced by a human. In order to decrease the amount of information presented to the judges, utterances that appeared identically in both systems were filtered out. We tested about 2000 DAs for each scenario, distributed uniformly between contrasts, except that we allowed 50% more comparisons between ML-10% and DT-10% because their performance was close. Table 2 shows the subjective quality assessments, which exhibit the same general trend as the objective results. If a large amount of target domain data is available, training everything from scratch (scrALL) achieves very good performance and adaptation is not necessary. However, if only a limited amount of in-domain data is available, efficient adaptation is critical (DT-10% & ML-10% > scr-10%). Moreover, judges also preferred the DT trained generator (DT-10%) over the ML trained generator (ML-10%), especially for informativeness. 
In the laptop to TV scenario, the informativeness score of the DT method (DT-10%) was considered indistinguishable from that of the model trained with the full training set (scrALL). The preference test results are shown in Table 3. Again, the adaptation methods (DT-10% & ML-10%) are crucial for bridging the gap between domains when target domain data is scarce (DT-10% & ML-10% > scr-10%). The results also suggest that the DT training approach (DT-10%) was preferred over ML training (ML-10%), even though the preference in this case was not statistically significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 1301,
"end": 1308,
"text": "Table 2",
"ref_id": null
},
{
"start": 2151,
"end": 2158,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "8"
},
{
"text": "In this paper we have proposed a procedure for training multi-domain, RNN-based language generators by data counterfeiting and discriminative training. The procedure is general and applicable to any data-driven language generator. Both corpus-based evaluation and human assessment were performed. Objective measures on corpus data have demonstrated that, by applying this procedure to adapt models between four different dialogue domains, good performance can be achieved with much less training data. Subjective assessment by human judges confirms the effectiveness of the approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "The proposed domain adaptation method requires a small amount of annotated data to be collected offline. In our future work, we intend to focus on training the generator on the fly with real user feedback during conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "We have tried training with both slots and values delexicalised and then using the weights to initialise unseen slot-value pairs in the target domain. However, this yielded even worse results, since the learned semantic alignment became stuck in local minima. Pre-training only the LM parameters did not produce better results either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The informable class includes all non-binary informable slots, while the binary class includes all binary informable slots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Tsung-Hsien Wen and David Vandyke are supported by Toshiba Research Europe Ltd, Cambridge Research Laboratory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Decoder integration and expected bleu training for recurrent neural network language models",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Jianfeng Gao. 2014. Decoder inte- gration and expected bleu training for recurrent neural network language models. In Proceedings of ACL. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Large-scale expected bleu training of phrase-based reordering models",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli, Michel Galley, and Jianfeng Gao. 2014. Large-scale expected bleu training of phrase-based re- ordering models. In Proceedings of EMNLP. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Theano: new features and speed improvements",
"authors": [
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Bastien",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Lamblin",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bergstra",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2012,
"venue": "Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9d\u00e9ric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Berg- eron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical language model adaptation: review and perspectives",
"authors": [
{
"first": "Jerome",
"middle": [
"R"
],
"last": "Bellegarda",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome R. Bellegarda. 2004. Statistical language model adaptation: review and perspectives. Speech Commu- nication.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Theano: a CPU and GPU math expression compiler",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bergstra",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Breuleux",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Bastien",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Lamblin",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Desjardins",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Python for Scientific Computing Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bergstra, Olivier Breuleux, Fr\u00e9d\u00e9ric Bastien, Pas- cal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Ben- gio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The ravenclaw dialog management framework: Architecture and systems",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Bohus",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"I"
],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Bohus and Alexander I. Rudnicky. 2009. The raven- claw dialog management framework: Architecture and systems. Computer Speech and Language.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recurrent neural network language model adaptation for multi-genre broadcast speech recognition",
"authors": [
{
"first": "Xie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tan",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Liu",
"middle": [],
"last": "Xunying",
"suffix": ""
},
{
"first": "Lanchantin",
"middle": [],
"last": "Pierre",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of InterSpeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xie Chen, Tan Tian, Liu Xunying, Lanchantin Pierre, Wan Moquan, Mark Gales, and Woodland Phil. 2015. Recurrent neural network language model adaptation for multi-genre broadcast speech recognition. In Pro- ceedings of InterSpeech.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Training a statistical surface realiser from automatic slot labelling",
"authors": [
{
"first": "Heriberto",
"middle": [],
"last": "Cuayhuitl",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Dethlefs",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Xingkun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heriberto Cuayhuitl, Nina Dethlefs, Helen Hastie, and Xingkun Liu. 2014. Training a statistical surface re- aliser from automatic slot labelling. In Spoken Lan- guage Technology Workshop (SLT), 2014 IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2009. Frustratingly easy domain adapta- tion. CoRR, abs/0907.1815.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning with augmented features for heterogeneous domain adaptation",
"authors": [
{
"first": "Lixin",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ivor",
"middle": [
"W"
],
"last": "Tsang",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lixin Duan, Dong Xu, and Ivor W. Tsang. 2012. Learn- ing with augmented features for heterogeneous do- main adaptation. CoRR, abs/1206.4660.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Policy committee for adaptation in multidomain spoken dialogue systems",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei-hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2015. Policy committee for adaptation in multi- domain spoken dialogue systems. In Proceedings of ASRU.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Maximum a posteriori estimation for multivariate gaussian mixture observations of markov chains. Speech and Audio Processing",
"authors": [
{
"first": "Jean-Luc",
"middle": [],
"last": "Gauvain",
"suffix": ""
},
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Luc Gauvain and Chin-Hui Lee. 1994. Maximum a posteriori estimation for multivariate gaussian mix- ture observations of markov chains. Speech and Audio Processing, IEEE Transactions on.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Topic-based language models using em",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Eu-roSpeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Thomas Hofmann. 1999. Topic-based language models using em. In Proceedings of Eu- roSpeech.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Maximum expected bleu training of phrase and lexicon translation models",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong He and Li Deng. 2012. Maximum expected bleu training of phrase and lexicon translation models. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parser-based retraining for domain adaptation of probabilistic generators",
"authors": [
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of INLG",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deirdre Hogan, Jennifer Foster, Joachim Wagner, and Josef van Genabith. 2008. Parser-based retraining for domain adaptation of probabilistic generators. In Proceedings of INLG. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Individuality and alignment in generated dialogues",
"authors": [
{
"first": "Amy",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Carsten",
"middle": [],
"last": "Brockmann",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Oberlander",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of INLG. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Isard, Carsten Brockmann, and Jon Oberlander. 2006. Individuality and alignment in generated dia- logues. In Proceedings of INLG. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Lexical-Functional Grammar: a formal system for grammatical representation",
"authors": [
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M. Kaplan and Joan Bresnan. 1982. Lexical- Functional Grammar: a formal system for grammati- cal representation. In Joan Bresnan, editor, The mental representation of grammatical relations. MIT Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Experiments in domain adaptation for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of StatMT. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In Proceedings of StatMT. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Discriminative training of language models for speech recognition",
"authors": [
{
"first": "Hong-Kwang",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong-kwang Kuo, Eric Fosler-lussier, Hui Jiang, and Chin-hui Lee. 2002. Discriminative training of lan- guage models for speech recognition. In Proceedings of ICASSP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generation that exploits corpus-based statistical knowledge",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Leggetter",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Woodland",
"suffix": ""
}
],
"year": 1995,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Leggetter and Philip Woodland. 1995. Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech and Language.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adaptive natural language generation in dialogue using reinforcement learning",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SemDial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Lemon. 2008. Adaptive natural language gen- eration in dialogue using reinforcement learning. In Proceedings of SemDial.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Trainable generation of big-five personality styles through datadriven parameter estimation",
"authors": [
{
"first": "Franois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franois Mairesse and Marilyn Walker. 2008. Trainable generation of big-five personality styles through data- driven parameter estimation. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Controlling user perceptions of linguistic style: Trainable generation of personality traits",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Mairesse and Marilyn A. Walker. 2011. Con- trolling user perceptions of linguistic style: Trainable generation of personality traits. Computer Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stochastic language generation in dialogue using factored language models",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Mairesse and Steve Young. 2014. Stochastic language generation in dialogue using factored lan- guage models. Computer Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Phrase-based statistical language generation using graphical models and active learning",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jur\u010d\u00ed\u010dek",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"Young"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Mairesse, Milica Ga\u0161i\u0107, Filip Jur\u010d\u00ed\u010dek, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation us- ing graphical models and active learning. In Proceed- ings of ACL. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Context dependent recurrent neural network language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov and Geoffrey Zweig. 2012. Context de- pendent recurrent neural network language model. In Proceedings of SLT.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafit",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ja\u0148",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of InterSpeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafit, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of InterSpeech.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multi-domain Dialog State Tracking using Recurrent Neural Networks",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Blaise Thomson, Milica Ga\u0161i\u0107, Pei-Hao Su, David Vandyke, Tsung- Hsien Wen, and Steve Young. 2015. Multi-domain Dialog State Tracking using Recurrent Neural Net- works. In Proceedings of ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A survey on transfer learning",
"authors": [
{
"first": "Sinno Jialin",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Trans. on Knowledge and Data Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Trans. on Knowledge and Data Engineering.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Recurrent neural network language model adaptation with curriculum learning",
"authors": [
{
"first": "Yangyang",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Larson",
"suffix": ""
},
{
"first": "Catholijn",
"middle": [
"M"
],
"last": "Jonker",
"suffix": ""
}
],
"year": 2015,
"venue": "Computer, Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangyang Shi, Martha Larson, and Catholijn M. Jonker. 2015. Recurrent neural network language model adap- tation with curriculum learning. Computer, Speech and Language.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Trainable sentence planning for complex information presentation in spoken dialog systems",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Stent, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentation in spoken dialog systems. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Evaluating evaluation methods for generation in the presence of variation",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Marge",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Singhai",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CICLing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of CICLing 2005.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A bayesian committee machine",
"authors": [
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
}
],
"year": 2000,
"venue": "Neural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volker Tresp. 2000. A bayesian committee machine. Neural Computation.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Sequencediscriminative training of recurrent neural networks",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Voigtlaender",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Doetsch",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Wiesler",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schluter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Voigtlaender, Patrick Doetsch, Simon Wiesler, Ralf Schluter, and Hermann Ney. 2015. Sequence-discriminative training of recurrent neural networks. In Proceedings of ICASSP.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Training a sentence planner for spoken dialogue using boosting",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Owen",
"middle": [
"C"
],
"last": "Rambow",
"suffix": ""
},
{
"first": "Monica",
"middle": [],
"last": "Rogati",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A Walker, Owen C Rambow, and Monica Rogati. 2002. Training a sentence planner for spoken dialogue using boosting. Computer Speech and Language.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Individual and domain adaptation in sentence planning for dialogue",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Franois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Amanda Stent, Franois Mairesse, and Rashmi Prasad. 2007. Individual and domain adaptation in sentence planning for dialogue. Journal of Artificial Intelligence Research (JAIR).",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Recent improvements in the cmu spoken language understanding system",
"authors": [
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Sunil",
"middle": [],
"last": "Issar",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of Workshop on HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Ward and Sunil Issar. 1994. Recent improvements in the cmu spoken language understanding system. In Proceedings of Workshop on HLT. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Tai-Yuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lin-Shan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Hung-Yi Lee, Tai-Yuan Chen, and Lin-Shan Lee. 2012. Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications. In Proceedings of SLT.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Recurrent neural network based language model personalization by social network crowdsourcing",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Heidel",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "Lin-Shan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of InterSpeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Aaron Heidel, Hung-Yi Lee, Yu Tsao, and Lin-Shan Lee. 2013. Recurrent neural network based language model personalization by social network crowdsourcing. In Proceedings of InterSpeech.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Dongho",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of SIGdial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Dongho Kim, Nikola Mrk\u0161i\u0107, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In Proceedings of SIGdial. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei-Hao Su, David Vandyke, and Steve Young. 2015b. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Backpropagation through time: what it does and how to do it",
"authors": [
{
"first": "Paul",
"middle": [
"J"
],
"last": "Werbos",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the IEEE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Pomdp-based statistical spoken dialog systems: A review",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve Young, Milica Ga\u0161i\u0107, Blaise Thomson, and Jason D. Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Kl-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Yu, Kaisheng Yao, Hang Su, Gang Li, and Frank Seide. 2013. Kl-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In Proceedings of ICASSP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "An example of data counterfeiting algorithm. Both slots and values are delexicalised. Slots and values that are not in the target domain are replaced during data counterfeiting (shown in red with * sign). The prefix inside bracket <> indicates the slot's functional class (I for informable and R for requestable).",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Results evaluated on TV domain by adapting models from laptop domain. Comparing train-from-scratch model (scratch) with model fine-tuning approach (tune) and data counterfeiting method (counterfeit). 10% \u2248 700 examples.",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "The same set of comparison as in",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "(a) Effect of DT on BLEU (b) Effect of DT on slot error rate",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "Effect of applying DT training after ML adaptation. The results were evaluated on laptop to TV adaptation. 10% \u2248 700 examples.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td/><td>Laptop</td><td>Television</td></tr><tr><td>informable slots</td><td>family, *pricerange, batteryrating, driverange, weightrange, isforbusinesscomputing</td><td>family, *pricerange, screensizerange, ecorating, hdmiport, hasusbport</td></tr><tr><td/><td>*name, *type, *price, warranty, battery,</td><td>*name, *type, *price, resolution,</td></tr><tr><td>requestable slots</td><td>design, dimension, utility, weight,</td><td>powerconsumption, accessories, color,</td></tr><tr><td/><td>platform, memory, drive, processor</td><td>screensize, audio</td></tr><tr><td/><td colspan=\"2\">*inform, *inform only match, *inform on match, inform all, *inform count,</td></tr><tr><td>act type</td><td colspan=\"2\">inform no info, *recommend, compare, *select, suggest, *confirm, *request,</td></tr><tr><td/><td>*request more, *goodbye</td><td/></tr><tr><td/><td colspan=\"2\">This approach allows the generator to share realisa-</td></tr><tr><td/><td colspan=\"2\">tions among slot-value pairs that have similar func-</td></tr><tr><td/><td colspan=\"2\">tionalities, therefore facilitates the transfer learning</td></tr></table>",
"text": "randomly select a new slot s that belongs to both the target ontology and the class of s to replace s. Repeat this process for every slot in the instance and yield a new pseudo instance (d_i, \u03a9_i) \u2208 T in the target domain. 4. Train a generator \u03b8_T on the counterfeited dataset {d_i, \u03a9_i} \u2208 T. 5. Refine parameters on real in-domain data. This yields final model parameters \u03b8_T.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "Ontologies for Laptop and TV domains",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table><tr><td/><td>5 **</td><td>-</td><td>44.9</td><td>36.8 **</td></tr><tr><td>DT-10%</td><td>66.1 **</td><td>55.1</td><td>-</td><td>35.9 **</td></tr><tr><td>scrALL</td><td>77.6 **</td><td>63.2 **</td><td>64.1 **</td><td>-</td></tr><tr><td>* p Pref.%</td><td colspan=\"4\">scr-10% ML-10% DT-10% scrALL</td></tr><tr><td>scr-10%</td><td>-</td><td>17.4 **</td><td>14.2 **</td><td>14.8 **</td></tr><tr><td>ML-10%</td><td>82.6 **</td><td>-</td><td>48.1</td><td>37.1 **</td></tr><tr><td>DT-10%</td><td>85.8 **</td><td>51.9</td><td>-</td><td>41.6 *</td></tr><tr><td>scrALL</td><td>85.2 **</td><td>62.9 **</td><td>58.4 *</td><td>-</td></tr><tr><td colspan=\"2\">* p &lt;0.05, ** p &lt;0.005</td><td/><td/><td/></tr><tr><td colspan=\"5\">(b) Preference test on laptop to TV adaptation scenario</td></tr></table>",
"text": "<0.05, ** p <0.005 (a) Preference test on TV to laptop adaptation scenario",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Pairwise preference test among four approaches in two domains. Statistical significance was computed using two-tailed binomial test.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}