| { |
| "paper_id": "W18-0502", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T05:25:45.673269Z" |
| }, |
| "title": "Using Paraphrasing and Memory-Augmented Models to Combat Data Sparsity in Question Interpretation with a Virtual Patient Dialogue System", |
| "authors": [ |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jin@ling.osu.edu" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "King", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "king@ling.osu.edu" |
| }, |
| { |
| "first": "Amad", |
| "middle": [], |
| "last": "Hussein", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "White", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "mwhite@ling.osu.edu" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The Ohio State University", |
| "location": { |
| "settlement": "Columbus", |
| "region": "OH", |
| "country": "USA" |
| } |
| }, |
| "email": "doug.danforth@osumc.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "When interpreting questions in a virtual patient dialogue system, one must inevitably tackle the challenge of a long tail of relatively infrequently asked questions. To make progress on this challenge, we investigate the use of paraphrasing for data augmentation and neural memory-based classification, finding that the two methods work best in combination. In particular, we find that the neural memory-based approach not only outperforms a straight CNN classifier on low frequency questions, but also takes better advantage of the augmented data created by paraphrasing, together yielding a nearly 10% absolute improvement in accuracy on the least frequently asked questions.", |
| "pdf_parse": { |
| "paper_id": "W18-0502", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "When interpreting questions in a virtual patient dialogue system, one must inevitably tackle the challenge of a long tail of relatively infrequently asked questions. To make progress on this challenge, we investigate the use of paraphrasing for data augmentation and neural memory-based classification, finding that the two methods work best in combination. In particular, we find that the neural memory-based approach not only outperforms a straight CNN classifier on low frequency questions, but also takes better advantage of the augmented data created by paraphrasing, together yielding a nearly 10% absolute improvement in accuracy on the least frequently asked questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "To develop skills such as taking a patient history and developing a differential diagnosis, medical students interact with actors who play the part of a patient with a specific medical history and pathology, known as Standardized Patients (SPs). Although SPs remain the standard way to test medical students on such skills, SPs are expensive and can behave inconsistently from student to student. A virtual patient dialogue system aims to overcome these issues as well as provide a means of supplying automated feedback on the quality of the medical student's interaction with the patient (see Figure 1).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 594, |
| "end": 603, |
| "text": "Figure 1)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In previous work, Danforth et al. (2009, 2013) and Maicher et al. (2017) used a hand-crafted pattern-matching system called ChatScript together with a 3D avatar in order to collect chatted dialogues and provide useful student feedback (Danforth et al., 2016). ChatScript matches input text using handwritten patterns and outputs a scripted response for each dialogue turn. With sufficient pattern-writing skill and effort, pattern matching with ChatScript can achieve relatively high accuracy, but it is unable to easily leverage increasing amounts of training data, is somewhat brittle with regard to misspellings, and can be difficult to maintain as new questions and patterns are added.",
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 39, |
| "text": "Danforth et al. (2009", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 40, |
| "end": 64, |
| "text": "Danforth et al. ( , 2013", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 67, |
| "end": 88, |
| "text": "Maicher et al. (2017)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 250, |
| "end": 273, |
| "text": "(Danforth et al., 2016)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "To address these issues, Jin et al. (2017) developed an ensemble of word- and character-based convolutional neural networks (CNNs) for question identification in the system that attained 79% accuracy, comparable to the hand-crafted ChatScript patterns. Moreover, they found that since the CNN ensemble's error profile was very different from that of the pattern-based approach, combining the two systems yielded a nearly 10% boost in system accuracy and an error reduction of 47% in comparison to using ChatScript alone. Perhaps not surprisingly, the CNN-based classifier outperformed the pattern-matching system on frequently asked questions, but on the least frequently asked questions, where data sparsity was an issue, the CNN performed much worse, achieving only 46.5% accuracy on the quintile of questions asked least often.",
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 42, |
| "text": "Jin et al. (2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we aim to combat this data sparsity issue by investigating (1) whether paraphrasing can be used to create novel synthetic training items, examining in particular lexical substitution from several resources (Miller, 1995; Le and Mikolov, 2014; Ganitkevitch et al., 2013; Cocos and Callison-Burch, 2016) and neural MT for back-translation (Mallinson et al., 2017); and (2) whether neural memory-based approaches developed for one-shot learning (Kaiser et al., 2017) perform better on low-frequency questions. We find that the two methods work best in combination, as the neural memory-based approach not only outperforms the straight CNN classifier on low frequency questions, but also takes better advantage of the augmented data created by paraphrasing. Together, the two methods yield a nearly 10% absolute improvement in accuracy on the least frequently asked questions.",
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 235, |
| "text": "(Miller, 1995;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 236, |
| "end": 257, |
| "text": "Le and Mikolov, 2014;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 258, |
| "end": 284, |
| "text": "Ganitkevitch et al., 2013;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 285, |
| "end": 316, |
| "text": "Cocos and Callison-Burch, 2016)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 352, |
| "end": 376, |
| "text": "(Mallinson et al., 2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 458, |
| "end": 479, |
| "text": "(Kaiser et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Question identification is a task that can be approached in at least two ways. One way is to treat it as a multiclass classification problem (e.g., using logistic regression), which can take advantage of class-specific features but tends to require a substantial amount of training data for each class. Formally, letting q be the candidate question, Y be a set of question classes and φ a feature extractor, we seek to find the most likely label ŷ = argmax_{y∈Y} e^{φ(q,y)} / Σ_{y′∈Y} e^{φ(q,y′)}. Alternatively, a pairwise setup can be used. For example, for each class a binary classification decision can be made as to whether a given question represents a paraphrase of a member of the class, choosing the highest confidence match. More generally, let q_i^y ∈ L_y be the i-th question variant for label y (where the question variants are the paraphrases of the label appearing in the training data); given some similarity metric σ, we seek to find the label ŷ with the most similar question variant q_i^ŷ in the set L_ŷ to the candidate question q:",
"cite_spans": [],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "ŷ = argmax_{y∈Y} e^{φ(q,y)} / Σ_{y′∈Y} e^{φ(q,y′)}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "ŷ = argmax_{y∈Y} max_{q_i^y ∈ L_y} σ(q, q_i^y)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Early work on question answering (Ravichandran et al., 2003) found that treating the task as a maximum entropy re-ranking problem outperformed using the same system as a multiclass classifier. By contrast, DeVault et al. (2011) observed that maximum entropy multiclass classifiers performed well with simple n-gram features when each class had a sufficient number of training examples. Jaffe et al. (2015) explored a log-linear pairwise ranking model for question identification in a virtual patient dialogue system and found it outperformed a multiclass baseline along the lines of DeVault et al. (2011). However, Jaffe et al. used a much smaller dataset with only about 915 user turns, less than one-fourth as many as in the current dataset. For this larger dataset, a straightforward logistic regression multiclass classifier outperforms a pairwise ranking model.", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 60, |
| "text": "(Ravichandran et al., 2003)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 193, |
| "end": 227, |
| "text": "By contrast, DeVault et al. (2011)", |
| "ref_id": null |
| }, |
| { |
| "start": 386, |
| "end": 405, |
| "text": "Jaffe et al. (2015)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In general it appears reasonable to expect that the comparative effectiveness of multiclass vs. pairwise approaches depends on the amount of training data, and that pairwise ranking methods have potential advantages for cross-domain and one-shot learning tasks (Vinyals et al., 2016; Kaiser et al., 2017) where data is sparse or nonexistent. Notably, in the closely related task of short-answer scoring, Sakaguchi et al. (2015) found that pairwise methods could be effectively combined with regression-based approaches to improve performance in sparse-data cases.", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 283, |
| "text": "(Vinyals et al., 2016;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 284, |
| "end": 304, |
| "text": "Kaiser et al., 2017)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 404, |
| "end": 427, |
| "text": "Sakaguchi et al. (2015)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Other work involving dialogue utterance classification has traditionally required a large amount of data. For example, Suendermann-Oeft et al. (2009) acquired 500,000 dialogues with over 2 million utterances, observing that statistical systems outperform rule-based ones as the amount of data increases. Crowdsourcing for collecting additional dialogues (Ramanarayanan et al., 2017) could alleviate data sparsity problems for rare categories by providing additional training examples, but this technique is limited to more general domains that do not require special training/skills. In the current medical domain, workers on common crowdsourcing platforms are unlikely to have the expertise required to take a patient's medical history in a natural way, so any data collected with this method would likely suffer quality issues and fail to generalize to real medical student dialogues. Rossen and Lok (2012) have developed an approach for collecting dialogue data for virtual patient systems, but their approach does not directly address the issue that even as the number of dialogues collected increases, there can remain a long tail of infrequently asked questions. As an alternative to crowdsourcing, we pursue paraphrasing for data augmentation in this paper, focusing on the simplest methods to employ, namely lexical substitution and neural back-translation (see Section 5). The idea is to augment the observed question instances for questions with infrequent labels in the dataset with automatically generated paraphrases, with the aim of making such questions easier to recognize using machine-learned models. In future work, we plan to explore more complex paraphrasing methods, including syntactic paraphrasing (Duan et al., 2016) and inducing paraphrase templates from aligned paraphrases (Fader et al., 2013).",
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 149, |
| "text": "(2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 354, |
| "end": 382, |
| "text": "(Ramanarayanan et al., 2017)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 887, |
| "end": 908, |
| "text": "Rossen and Lok (2012)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 1682, |
| "end": 1701, |
| "text": "(Duan et al., 2016)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1761, |
| "end": 1781, |
| "text": "(Fader et al., 2013)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Our dataset currently consists of 4330 question-answer pairs from 94 dialogues between first year medical students and the virtual patient. After classifying an asked question as having a certain label, the virtual patient replies with the canned response for that label, as illustrated in Table 1. Unfortunately, the labels do not have a uniform distribution with regard to the number of variants each label has (that is, the number of question instances for that label in the dataset). In fact, most of the labels are underrepresented.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 289, |
| "end": 296, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Imbalance", |
| "sec_num": "3" |
| }, |
| { |
"text": "On average, each question label has 12 variants, but 8 labels account for nearly 20% of the data, while 256 labels account for the bottom 20% (Figure 2). We define a rare label to be any label in that set of 256 infrequent labels. Supplementing the data to account for this imbalance is the primary focus of our work.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 152, |
| "text": "(Figure 2)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Imbalance", |
| "sec_num": "3" |
| }, |
| { |
"text": "Because of the data sparsity issue, we cast the problem of sentence classification for infrequent labels as a problem of few-shot learning. In particular, we use Kaiser et al.'s (2017) memory module together with a CNN encoder (Kim, 2014; Jin et al., 2017) as our main model, the memory-augmented CNN classifier (MA-CNN). Our aim is to take advantage of the MA-CNN's one-shot learning capability to mitigate the issue of data sparsity and also to make better use of data augmentation to achieve better performance.",
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 184, |
| "text": "Kaiser et al.'s (2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 227, |
| "end": 238, |
| "text": "(Kim, 2014;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 239, |
| "end": 255, |
| "text": "Jin et al., 2017", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Memory-Augmented CNN Classifier", |
| "sec_num": "4" |
| }, |
| { |
"text": "The CNN encoder follows Kim (2014) and Jin et al. (2017). We briefly summarize the architecture here and direct interested readers to these two papers for implementation details. There are four layers in the encoder: an embedding layer, a convolution layer, a max-pooling layer and a linear layer. Let x_i ∈ R^k be a k-dimensional embedding for the i-th element of the sentence s. We concatenate all of the element embeddings to get S ∈ R^{|s|×k} as the representation of the whole sentence.",
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 34, |
| "text": "Kim (2014)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 39, |
| "end": 56, |
| "text": "Jin et al. (2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The CNN encoder", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "The convolution layer may have many kernels, which are defined as weight matrices w_j ∈ R^{hk}, where h is the width of the kernel. They slide across the sentence representation and then pass through a nonlinearity to produce a feature map c_j ∈ R^{|s|−h+1}. Then the max-pooling layer uses max-over-time pooling (Collobert et al., 2011) on the feature maps to ensure fixed-dimensional outputs.",
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 335, |
| "text": "(Collobert et al., 2011)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The CNN encoder", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Finally, we concatenate all the outputs from all the kernels into a single vector o, multiply it by the weight matrix W_l and apply ℓ2-normalization to it as the final fully-connected neural network layer for the CNN encoder:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The CNN encoder", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "e = (o · W_l + b_l) / ∥o · W_l + b_l∥",
| "eq_num": "(1)" |
| } |
| ], |
| "section": "The CNN encoder", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Here W_l and b_l are the weight matrix and the bias term for the final layer, respectively.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The CNN encoder", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "We follow Kaiser et al. (2017) for the implementation of our memory module. The memory module is a tuple of three matrices K, V and A, which store the memory entries, one per row. (Table 1: Sample interactions between a first year medical student and the virtual patient. The virtual patient's task is to accurately detect the kind of question the medical student is asking and then reply with the appropriate canned response.)",
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 30, |
| "text": "Kaiser et al. (2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 151, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "Each row holds one key, one label and one age for one memory entry. A key is an encoded representation of a training item, a label is the class identifier that the key belongs to, and the age is the number of memory updates that have taken place since the key was inserted or updated. To use the memory, a normalized query item q is multiplied by the key matrix",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "s = q · K (2)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "to yield a vector of cosine similarities s between the query and every entry in the memory. The prediction made by the memory is then v̂ = V[n̂], where n̂ = argmax(s) and v̂ is the predicted class label.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "The memory operations include insert, update and erase, and the loss calculation for the memory depends on the memory operations, so we briefly summarize them here. Let n̂ be the row index in s with the highest similarity score such that V[n̂] is the true label of the query, ñ be the row index of the entry with the highest similarity score that has a different label from the true label, and v be the true label. When s[n̂] > s[ñ], the memory loss is a margin loss between the similarity scores at n̂ and at ñ with some margin α:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "loss = [s[ñ] − s[n̂] + α]_+",
| "eq_num": "(3)" |
| } |
| ], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "In this case, the memory entry at n̂ will be updated by replacing it with the normalized average of itself and the query:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "K[n̂] ← (q + K[n̂]) / ∥q + K[n̂]∥",
| "eq_num": "(4)" |
| } |
| ], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "When", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "s[n̂] < s[ñ]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ", the memory loss is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "loss = [s[n̂] − s[ñ] + α]_+",
| "eq_num": "(5)" |
| } |
| ], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "In this case, a new entry is inserted at a previously empty row n′:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "K[n′] ← q, V[n′] ← v",
| "eq_num": "(6)" |
| } |
| ], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "In both cases, the entry in A at the update or insert site is reset to 0, and all the other entries in A are incremented by 1. When the memory is full, a new insertion takes place at the row where A[n′] is largest. Finally, if there is no entry in K that has the true label v, the insert operation is carried out without any loss calculation. The erase operation resets all three matrices to empty; it is used at the end of a training episode.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The memory module", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "We train our memory-augmented CNN classifier using a novel episodic training scheme based on the episodic training scheme used in one-shot learning (Vinyals et al., 2016; Kaiser et al., 2017). The main difference is that in one-shot learning, most tasks offer a balanced dataset with many classes but small numbers of instances per class. In our scenario, the dataset is imbalanced, and some classes may have a large number of instances. Moreover, in evaluation, there are no unseen classes in our case. We modify the episodic training scheme to accommodate these differences.",
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 169, |
| "text": "(Vinyals et al., 2016;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 170, |
| "end": 190, |
| "text": "Kaiser et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Episodic training and evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "In training, we define an episode to be a complete k-shot learning trial with gradient updates. At the beginning of each episode, a batch of |C| × (k + 1) samples, where |C| is the number of classes, is sampled from the training data. The first sample of each class is then encoded and inserted into the memory with no loss calculated, which we call loading the memory. From the second sample on, the encoder encodes each sample, and the memory calculates its loss according to its prediction. After all classes have had one sample complete this process, the encoder is updated by the gradients calculated from the memory loss. The memory is then updated according to the operations corresponding to its predictions of the seen samples in each shot. When all k shots have been processed, the memory is completely erased, ready for the next episode (though naturally the updates to the encoder remain in effect).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Episodic training", |
| "sec_num": null |
| }, |
| { |
| "text": "It is easy to see that this process involves oversampling, which is a known technique for rebalancing imbalanced datasets. Because each class must have k + 1 samples for each episode, the minority classes have to be oversampled. However, experiments show that oversampling itself does not lead to better performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Episodic training", |
| "sec_num": null |
| }, |
| { |
| "text": "In evaluation, we define a support set to be a batch of |C| \u00d7 k samples from the training data. For a given test set, we first load the memory, then compare each test item to all the entries in the memory in order to generate the memory prediction for the test item based on the most similar memory entry. This forms the model's 1-shot predictions. Then we update the memory with the second sample for each class and redo the prediction step. We now have the model predictions with 2 shots. We continue to follow this routine until predictions from all k shots have been collected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Episodic evaluation", |
| "sec_num": null |
| }, |
| { |
| "text": "Because there is some randomness in how a support set is sampled from the data, we use multiple support sets in evaluation. Since some of the classes have a large number of instances, each randomly sampled support set tends to be sufficiently different from other support sets that using multiple support sets becomes analogous to ensembling different models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Episodic evaluation", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, letting p be the number of support sets, we have k \u00d7 p predicted labels for each item in the test set. We use majority voting across all the predicted labels to get the final model prediction. This capitalizes on the ensembled support sets and reduces the variance of the model predictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Episodic evaluation", |
| "sec_num": null |
| }, |
| { |
"text": "Since previous work (Jin et al., 2017) showed that the majority of labels in our dataset have 11 variants or fewer, we explore using lexical substitution (McCarthy and Navigli, 2009) and neural machine translation (NMT) back-translation (Mallinson et al., 2017) for data augmentation. The main difference between our use of lexical substitution and previous work's is that our setup is unsupervised, as we have no gold test set for determining acceptable paraphrases. Similarly, for the NMT system we do not know which outputs are acceptable. To mitigate this, we employ both human and automatic filtering of the generated paraphrases, with the end goal of facilitating question label identification for infrequent labels.",
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 38, |
| "text": "(Jin et al., 2017)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 154, |
| "end": 182, |
| "text": "(McCarthy and Navigli, 2009)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 237, |
| "end": 261, |
| "text": "(Mallinson et al., 2017)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We exploit advances in lexical substitution and NMT to automatically produce paraphrases. We also combine these approaches to determine their collective effectiveness in our downstream label identification task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase generation", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "Lexical substitution has often been held up as an exemplary task for paraphrase generation. In its simplest form, one must simply replace a given word with an appropriate paraphrase, i.e., one that retains most of the original sentence's meaning. For example, in the question have you ever been seriously ill?, seriously could be replaced with severely, and we would consider this an appropriate substitution. However, if we instead substituted solemnly for the same word, we would not accept it, as the meaning would deviate too far.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical substitution", |
| "sec_num": null |
| }, |
| { |
"text": "For generating paraphrases, we employ three resources: WordNet (Miller, 1995), Word2Vec (Le and Mikolov, 2014), and paraphrase clusters from Cocos and Callison-Burch (2016). To evaluate these resources, we measured the mean average precision (MAP) of a given resource's ability to produce a lexical substitution matching a word that already existed in another variant of the same label. That is, if the label how has the pain affected your work? had only two variants, has the injury made your job difficult? and is it hard for you to do your job?, and a resource successfully produced the swap hard → difficult (producing the sentence is it difficult for you to do your job?), this would positively affect the resource's MAP score. We only performed this evaluation on labels with 30 or more variants, as this form of evaluation disproportionately penalizes labels with fewer variants.",
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 77, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 89, |
| "end": 111, |
| "text": "(Le and Mikolov, 2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 143, |
| "end": 174, |
| "text": "Cocos and Callison-Burch (2016)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical substitution", |
| "sec_num": null |
| }, |
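The MAP evaluation described above can be sketched as follows. This is a simplified, hypothetical reconstruction, not the authors' code: a substitution candidate counts as a hit when the candidate word already appears in another variant of the same label, and average precision is computed over the ranked candidate list for one substitution site.

```python
def is_hit(candidate, other_variants):
    """A substitution is a 'hit' if the candidate word already occurs
    in some other variant of the same label (simplified criterion)."""
    return any(candidate in variant for variant in other_variants)

def average_precision(ranked_candidates, other_variants):
    """Average precision of a ranked candidate list for one substitution site.
    MAP is then the mean of this value over all evaluated sites."""
    hits, precisions = 0, []
    for rank, cand in enumerate(ranked_candidates, start=1):
        if is_hit(cand, other_variants):
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# The paper's example: substituting "difficult" matches the other variant.
variants = [["has", "the", "injury", "made", "your", "job", "difficult"]]
ap = average_precision(["difficult", "tough", "solemn"], variants)
```

Here `ap` is 1.0 because the only hit is ranked first; a resource that ranked `solemn` above `difficult` would be penalized.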
| { |
| "text": "These preliminary experiments indicated that pooling candidates from all three resources performed better than any one resource alone. We also found that in the case of multiple word senses (e.g., bug meaning an insect, an illness, or a flaw in a program), simply picking the first sense produced a higher MAP score than a variety of other selection algorithms. This is not surprising since, in the case of WordNet, the first synset is the most frequently used sense of a given word. Cocos and Callison-Burch's semantic clusters were ordered by a given cluster's average mutual paraphrase score as annotated in the Paraphrase Database (Ganitkevitch et al., 2013) . Although our domain is medical, the dialogues are patient-directed, less technical, and more colloquial, allowing us to use such a simple selection method for word sense disambiguation.", |
| "cite_spans": [ |
| { |
| "start": 646, |
| "end": 673, |
| "text": "(Ganitkevitch et al., 2013)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical substitution", |
| "sec_num": null |
| }, |
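The first-sense heuristic can be illustrated with a toy, frequency-ordered sense inventory; the inventory and its sense ordering below are placeholder assumptions for illustration, not the actual WordNet synsets or PPDB clusters used in the paper.

```python
# Toy sense inventory: for each word, senses are ordered by assumed
# frequency of use, mirroring WordNet's synset ordering.  The entries
# themselves are invented for this sketch.
SENSES = {
    "bug": [
        {"illness", "virus", "infection"},   # sense 1: assumed most frequent here
        {"insect", "beetle"},                # sense 2: an insect
        {"glitch", "defect"},                # sense 3: a software flaw
    ],
}

def first_sense_substitutes(word):
    """Take paraphrase candidates from the first (most frequent) sense only,
    implementing the simple selection method described in the text."""
    senses = SENSES.get(word, [])
    return sorted(senses[0]) if senses else []
```

With this ordering, `first_sense_substitutes("bug")` yields only illness-related candidates, never `beetle` or `glitch`.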
| { |
| "text": "To augment the data in a way that would help the sparsest labels, we focused our lexical substitution task on labels with less than 11 variants. After pooling all the lexical substitution candidates from each resource, we ranked the substitutions by subtracting the original sentence's n-gram log probability from its paraphrase's. 1 We then extracted the top 100 scoring paraphrases for our initial unfiltered data set.", |
| "cite_spans": [ |
| { |
| "start": 338, |
| "end": 339, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical substitution", |
| "sec_num": null |
| }, |
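The ranking step can be sketched as below; the toy unigram model stands in for the 5-gram Gigaword model referenced in the footnote, so the scores are purely illustrative.

```python
import math

def score_paraphrase(lm_logprob, original, paraphrase):
    """Rank a paraphrase by how much more probable it is than its source
    under a language model: paraphrase log-prob minus source log-prob."""
    return lm_logprob(paraphrase) - lm_logprob(original)

# Stand-in for the 5-gram Gigaword model: invented unigram probabilities.
UNIGRAM = {"is": 0.05, "it": 0.05, "hard": 0.001, "difficult": 0.004,
           "for": 0.05, "you": 0.05, "to": 0.05, "do": 0.05,
           "your": 0.04, "job": 0.002}

def toy_logprob(sentence):
    return sum(math.log(UNIGRAM.get(w, 1e-6)) for w in sentence.split())

orig = "is it hard for you to do your job"
para = "is it difficult for you to do your job"
delta = score_paraphrase(toy_logprob, orig, para)
```

A positive `delta` means the paraphrase is more fluent than the source under the model, pushing it toward the top-100 cut.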
| { |
| "text": "We additionally use Neural Machine Translation (NMT) to generate paraphrases by pivoting between languages. In multiple back-translation, a method developed by Mallinson et al. (2017), we take a given English source sentence and generate n-best translations into a pivot language. This is the forward step. For each pivot translation we generate an m-best list of translations back into English. This backward step thus yields n\u00d7m paraphrases for a given source sentence, where each paraphrase within this final set has a weight based on which of the original n translations it came from in the forward step and its ranking among the m translations in the backward step. Any duplicates within this final set are collapsed and their weights are combined before the set is ranked by weight. This method favors translations which come from high-quality sources (high-ranking translations in the n-best and m-best lists) as well as translations which occur multiple times.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural machine translation", |
| "sec_num": null |
| }, |
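The collapse-and-rank step of multiple back-translation might look like the following sketch. The rank-derived weights and their product combination are assumptions made for illustration; the paper does not give the exact weighting formula.

```python
from collections import defaultdict

def rank_backtranslations(forward_nbest, backward_mbest):
    """Collapse duplicate paraphrases from the n x m back-translation grid,
    summing their weights, then rank by total weight (descending).

    forward_nbest: list of (pivot_sentence, forward_weight)
    backward_mbest: dict pivot_sentence -> list of (english_sentence, back_weight)
    """
    totals = defaultdict(float)
    for pivot, f_weight in forward_nbest:
        for english, b_weight in backward_mbest[pivot]:
            # Duplicates across pivots accumulate weight, so paraphrases
            # that occur multiple times rise in the ranking.
            totals[english] += f_weight * b_weight
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Hypothetical 2-best forward / 2-best backward example (n = m = 2):
forward = [("de_pivot_1", 0.6), ("de_pivot_2", 0.4)]
backward = {"de_pivot_1": [("are you often ill", 0.7), ("are you sick a lot", 0.3)],
            "de_pivot_2": [("are you often ill", 0.5), ("do you get sick often", 0.5)]}
ranked = rank_backtranslations(forward, backward)
```

"are you often ill" wins here precisely because it appears under both pivots, matching the method's preference for repeated, high-ranking translations.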
| { |
| "text": "In our work we translated each given source sentence into 10-best forward translations and 10-best back translations before finally collapsing and ranking the 100 paraphrases. We used a model from Sennrich et al. (2016) and chose German as our pivot language given the quality of the translations and paraphrases we observed. 2 Figure 3 : A graphical representation of the pseudo-oracle selection process. For a given test item (here Target), the n-gram overlap with the paraphrase must be greater than the overlap with the source sentence that paraphrase was derived from.", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 218, |
| "text": "Sennrich et al. (2016)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 327, |
| "end": 335, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Neural machine translation", |
| "sec_num": null |
| }, |
| { |
| "text": "Since both the lexical substitution and NMT methods generate both helpful and unhelpful paraphrases, we needed a way to select useful ones. Although a typical next step might be to filter each system's output by hand, we were unsure whether expensive human filtering would produce any gain in downstream performance. To explore this question, we experimented with a fully automatic pseudo-oracle.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The pseudo-oracle is an automatic filter which we designed to look at a particular test item in a cross-validation setup and select the paraphrases whose n-gram recall with that test item was higher than the original source sentence's, as illustrated in Figure 3 . In using this initial step of filtering, we are able to isolate the paraphrases which are most likely to be helpful for classifying question labels. In preliminary experiments using logistic regression, we tested the performance of the pseudo-oracle selection process on the downstream classification task, where we found that the pseudo-oracle was able to facilitate classifying question labels, whereas using all the outputs from the lexical substitution and NMT paraphrase generation systems (without filtering) led to a drop in performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 254, |
| "end": 262, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": "5.2" |
| }, |
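The pseudo-oracle criterion can be sketched directly from the description above; whitespace tokenization and the choice of unigram recall are simplifications made for this sketch.

```python
def ngrams(tokens, n):
    """Set of n-grams of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_recall(candidate, target, n=1):
    """Fraction of the target's n-grams that the candidate covers."""
    tgt = ngrams(target, n)
    return len(tgt & ngrams(candidate, n)) / len(tgt) if tgt else 0.0

def pseudo_oracle_keep(source, paraphrase, test_item, n=1):
    """Keep a paraphrase iff its n-gram recall against the held-out test item
    exceeds that of the source sentence it was derived from (Figure 3)."""
    return ngram_recall(paraphrase, test_item, n) > ngram_recall(source, test_item, n)

target = "is it difficult for you to work".split()
source = "is it hard for you to do your job".split()
para = "is it difficult for you to do your job".split()
keep = pseudo_oracle_keep(source, para, target)
```

Here the paraphrase recovers `difficult` from the test item, beating its source's recall, so the pseudo-oracle keeps it.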
| { |
| "text": "Thus, to lessen the expense of human filtering, we used the pseudo-oracle as an automated first step, under the assumption that the selected paraphrases would mostly be kept under manual filtering as well. Next, using the same Gigaword-trained language model from Section 5.1, we ranked the lexical substitution and NMT outputs. From these ranked lists, we extracted the highest-scoring subsets such that each paraphrase not only had a high log probability, but also contributed a unique n-gram (i.e., if two paraphrases contributed the same new n-gram, only the highest-scoring paraphrase was selected). This diversity-enhancing filtering reduced the size of the dataset to around 20% of the original raw lexical substitution output and 2.5% of the raw NMT output, greatly lessening human annotation costs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": "5.2" |
| }, |
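A minimal sketch of the diversity-enhancing filter, assuming a greedy pass in descending score order (the paper does not specify tie-breaking, so that detail is an assumption):

```python
def diversity_filter(scored_paraphrases, existing_ngrams, n=2):
    """Greedily keep, in score order, only paraphrases that contribute at
    least one n-gram not already seen in the gold variants or in an
    already-kept paraphrase."""
    kept, seen = [], set(existing_ngrams)
    for score, tokens in sorted(scored_paraphrases, reverse=True):
        grams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        if grams - seen:          # contributes at least one novel n-gram
            kept.append((score, tokens))
            seen |= grams
    return kept

# Hypothetical example: two candidates share the same bigrams, so only the
# higher-scoring one survives.
gold_bigrams = {("do", "you")}
candidates = [(-1.0, ["do", "you", "smoke"]),
              (-2.0, ["do", "you", "smoke"]),
              (-3.0, ["are", "you", "a", "smoker"])]
kept = diversity_filter(candidates, gold_bigrams)
```

This is how a large raw candidate pool can shrink to a small fraction of its size while still covering the same set of novel n-grams.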
| { |
| "text": "Since we instructed the annotators (a subset of the authors) to select only useful paraphrases which contributed novel n-grams not present in any other variant, their task was necessarily different from the pseudo-oracle's. Manual filtering required 16 hours per annotator. We found that the annotators selected paraphrases which might not necessarily help the downstream task in a cross-validation setup, but which could be expected to help with completely unseen data. For this reason, we chose to combine the pre-selected paraphrases chosen by the pseudo-oracle together with the human-filtered paraphrases in our evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In all of our experiments, we use the best model from Jin et al. (2017) , namely a stacked convolutional neural network (Stacked-CNN), together with the model proposed in this work (MA-CNN). Our task is to accurately predict a question's label based solely on the typed input from the medical student. With improved accuracy, the virtual patient will be able to answer the students' questions more coherently.", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 42, |
| "text": "Jin et al. (2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We first shuffle the gold dataset and use 10-fold cross-validation to evaluate our data augmentation process. Because both the proposed model and the data augmentation target the rare labels, we focus our analysis on how the models perform on them. Paraphrases are not added to test sets, and paraphrases derived from those test items are filtered from training. Finally, we compute significance using the McNemar test (McNemar, 1947) .", |
| "cite_spans": [ |
| { |
| "start": 548, |
| "end": 563, |
| "text": "(McNemar, 1947)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We mostly follow Jin et al. (2017) in setting the hyperparameters of the CNN encoder in MA-CNN. We use only word-based features in the encoder. Following Jin et al. (2017) , we set the number of kernels of the MA-CNN encoder to 300, with kernel widths 3 to 5. All non-linearities in the models are rectified linear units Nair and Hinton (2010) . We use Adadelta (Zeiler, 2012) as the optimizer for the whole MA-CNN, with the recommended values for its hyperparameters (\u03c1 = 0.9, \u03b5 = 1 \u00d7 10 \u22126, learning rate = 1.0). We initialize the embeddings with Word2Vec but allow them to be tuned by the system (Mikolov et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 34, |
| "text": "Jin et al. (2017)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 154, |
| "end": 171, |
| "text": "Jin et al. (2017)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 353, |
| "end": 375, |
| "text": "Nair and Hinton (2010)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 633, |
| "end": 655, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "For episodic training, we set the number of shots to be 10. For the episodic evaluation, we use 5 support sets. For each support set, we also do 10-shot evaluation. Therefore for each test item, there are 50 predictions in total. We combine all predictions with majority voting, weighted by the similarity score of each prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyperparameters", |
| "sec_num": "6.1" |
| }, |
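The weighted majority vote over the 50 episodic predictions (5 support sets × 10 shots per test item) might be implemented as below; interpreting "weighted by the similarity score" as summing similarity scores per predicted label is our assumption.

```python
from collections import defaultdict

def weighted_majority_vote(predictions):
    """Combine per-item episodic predictions, each a (label, similarity_score)
    pair, by summing scores per label and returning the label with the
    highest total."""
    totals = defaultdict(float)
    for label, score in predictions:
        totals[label] += score
    return max(totals, key=totals.get)

# Hypothetical predictions for one test item (label names invented):
preds = [("pain_location", 0.9), ("pain_location", 0.8), ("pain_onset", 0.95)]
winner = weighted_majority_vote(preds)
```

Even though `pain_onset` holds the single highest-similarity prediction, `pain_location` wins on combined weight, which is the point of voting across support sets.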
| { |
| "text": "We first train our model MA-CNN and the stacked CNN model from Jin et al. (2017) using just the original VP dataset and explore how the model architecture affects rare label accuracy. Table 2 shows the test accuracy for both models. MA-CNN performs very well on the rare labels. The performance difference between the stacked CNN model and MA-CNN is highly significant, which shows that the pairwise-classification approach paired with episodic training is highly effective on items belonging to labels with few training instances. We can also see that MA-CNN does not perform as well as the CNN ensemble on all labels, consistent with the previous observation that non-pairwise classifiers work better when training data is large. It is worth noting, though, that the stacked CNN ensemble consists of 10 CNNs that take word- and character-based features as inputs, whereas the encoder of the MA-CNN is a single word-based CNN. This further illustrates how a pairwise system designed specifically for classes with few training instances can improve performance on those classes through nearest-neighbor comparison and episodic training inspired by one-shot learning.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 184, |
| "end": 191, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MA-CNN on rare labels", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We further explore the effect on model performance of using the generated paraphrases along with the gold training data. We use the manually filtered dataset from both paraphrasing methods, and train both the stacked CNN ensemble (Jin et al., 2017) and MA-CNN with it plus the gold set. Table 3 shows the results on the test set. First, both models benefit in rare label accuracy from the augmented dataset. The difference between MA-CNN trained with only the gold dataset and with the augmented dataset is highly significant (p = 9.5 \u00d7 10 \u22125 , McNemar's test), showing that the generated paraphrases are of high quality and help MA-CNN achieve even better performance on the rare labels. Interestingly, full accuracy of both models does not significantly change, showing that the paraphrases are of high enough quality not to harm the frequent labels. Table 4 shows the effect of using pseudo-oracle and manually filtered data on rare labels. We find that the MA-CNN is able to use the data augmentation in a way that directly benefits the rare labels. Specifically, the MA-CNN benefits from the human-filtered data, indicating that it benefits from information that raw n-gram overlap does not capture. At the same time, filtering with the pseudo-oracle evidently provides a reasonable approximation of the accuracy improvements obtainable with human filtering of the generated paraphrases.", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 162, |
| "text": "(Jin et al., 2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 489, |
| "end": 496, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 1089, |
| "end": 1096, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generated paraphrases as training data", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We also examine how performance on rare labels depends on the method with which the paraphrases are generated, using the individual subsets generated by each single method to augment the training data. Table 5 shows how these methods compare. Surprisingly, simple lexical substitution is already good at providing information helpful to MA-CNN, but neural machine back-translation is even better at providing paraphrases with a positive impact on rare label accuracy. Inspecting the paraphrases generated by both methods, we find that paraphrases from back-translation are generally more diverse in phrasal structure and contain more novel words than those generated with lexical substitution. The combined dataset gives a further improvement, showing that lexical substitution and neural machine translation are at least partially complementary generation methods.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 230, |
| "end": 237, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Quality of generated paraphrases", |
| "sec_num": "6.5" |
| }, |
| { |
| "text": "Given that the MA-CNN performs very well on rare labels, but not as well on all labels, it is interesting to see whether a combined system with the stacked CNN and MA-CNN can provide a further performance increase. We here choose a relatively simple logistic regression model as our model combiner, though a more sophisticated model could be used in principle. [Table 6 : Test results for the combiner as well as the two combined subsystems: the stacked CNN ensemble trained with gold and the memory-augmented CNN classifier trained with gold and generated paraphrases. The gain compared to the stacked CNN on full accuracy is highly significant (p = 1.9 \u00d7 10 \u22129 , McNemar's test).] Using 1-5 grams of words and stemmed words as well as 2-5 grams", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 429, |
| "end": 436, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Combining the stacked CNN and the MA-CNN", |
| "sec_num": "6.6" |
| }, |
| { |
| "text": "of characters, we trained the model to predict the rarity of a label for a question, i.e., whether a candidate question belongs to a rare label or not. This rarity predictor achieves 94.2% accuracy on all labels and 78.1% accuracy on rare labels; note that the majority baseline is 80% for all labels but only 20% for rare labels. The rarity predictor serves as our combiner: we use it to choose which of the two classification systems to trust. If the combiner predicts that an item belongs to a rare label, we take the prediction from the MA-CNN; if it instead predicts a frequent label, we take the prediction from the stacked CNN. This is done with 10-fold cross-validation, just as the classifiers were trained above. The stacked CNN model used here is the one trained with only gold training data, which has the best accuracy on all labels; the MA-CNN model is the one trained with both gold and generated data. With the combiner, we get 50.98% accuracy on rare labels and 79.86% accuracy on all labels, as shown in Table 6 . The result indicates that the two systems are complementary, and that even simple combination provides a significant performance boost. Although the accuracy on rare labels is not as high as with the MA-CNN by itself, it is 5 points higher than the stacked CNN model's, and these gains translate into an increase of close to 1 point in accuracy on all labels.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1101, |
| "end": 1108, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Combining the stacked CNN and the MA-CNN", |
| "sec_num": "6.6" |
| }, |
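The routing logic of the combiner can be sketched as follows; the three callables are stand-ins for the trained rarity predictor, MA-CNN, and stacked CNN ensemble, and the toy implementations below are invented for illustration.

```python
def combine(question, rarity_predictor, ma_cnn, stacked_cnn):
    """Route each question: if the rarity predictor says the question belongs
    to a rare label, trust MA-CNN's prediction; otherwise trust the stacked
    CNN ensemble's prediction."""
    if rarity_predictor(question):
        return ma_cnn(question)
    return stacked_cnn(question)

# Toy stand-ins for the trained models (label names invented):
def is_rare(q):
    return "allergic" in q

def ma(q):
    return "allergy_history"

def stacked(q):
    return "general_pain"

label = combine("are you allergic to anything", is_rare, ma, stacked)
```

Because routing happens per item, the combiner can exploit MA-CNN's strength on rare labels without sacrificing the stacked ensemble's accuracy on frequent ones.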
| { |
| "text": "In this paper, we have investigated the use of paraphrasing for data augmentation and neural memory-based classification in order to tackle the challenge of a long tail of relatively infrequently asked questions in a virtual patient dialogue system. We find that both lexical substitution and neural back-translation yield paraphrases of observed questions that improve system performance on rare labels once the generated paraphrases are manually filtered down to those taken to be useful, with neural back-translation contributing more to the gains in accuracy than lexical substitution. We also find that neural memory-based classification with a novel method of episodic training outperforms a straight CNN classifier on low-frequency questions and takes better advantage of the generated paraphrases, together yielding a nearly 10% absolute improvement in accuracy on the least frequently asked questions. Finally, using a simple logistic regression model to combine the predictions of the straight CNN and the memory-based classifier, we find that the combined system performs better on all labels, with the gain coming from more accurate predictions on rare labels. We expect these gains to yield increased user engagement and ultimately better learning outcomes. In future work, we plan to investigate using the memory-based classifier for fully automatic paraphrase filtering, as well as more advanced methods of paraphrasing, including deep generative paraphrasing, syntactic paraphrasing, and using aligned paraphrases to induce paraphrase templates. More powerful models may also be explored to better combine the two classifiers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We used a 5-gram language model with backoff, trained on Gigaword (Parker et al., 2011). 2 We found that the pretrained model for German produced the best back-translations when compared to other pre-trained models. In future work, we plan to train our own models across various pivot languages to produce an increased variety of paraphrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Thanks to Kellen Maicher for creating the virtual environment and to Evan Jaffe, Eric Fosler-Lussier and William Schuler for feedback and discussion. This project was supported by funding from the Department of Health and Human Services Health Resources and Services Administration (HRSA D56HP020687), the National Board of Medical Examiners Edward J. Stemmler Education Research Fund (NBME 1112-064), and the National Science Foundation (NSF IIS 1618336). The project does not necessarily reflect NBME policy, and NBME support provides no official endorsement. We thank Ohio Supercomputer Center (1987) for computation support.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Clustering paraphrases by word sense", |
| "authors": [ |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Cocos", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1463--1472", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anne Cocos and Chris Callison-Burch. 2016. Cluster- ing paraphrases by word sense. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1463-1472.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Natural Language Processing (Almost) from Scratch", |
| "authors": [ |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493-2537.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Can virtual standardized patients be used to assess communication skills in medical students", |
| "authors": [ |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Price", |
| "suffix": "" |
| }, |
| { |
| "first": "Kellen", |
| "middle": [], |
| "last": "Maicher", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Liston", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Clinchot", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Ledford", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| }, |
| { |
| "first": "Holly", |
| "middle": [], |
| "last": "Cronau", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 17th Annual IAMSE Meeting", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douglas Danforth, A. Price, Kellen Maicher, D. Post, Beth Liston, Daniel Clinchot, Cynthia Ledford, D. Way, and Holly Cronau. 2013. Can virtual stan- dardized patients be used to assess communication skills in medical students. In Proceedings of the 17th Annual IAMSE Meeting, St. Andrews, Scotland.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Development of virtual patient simulations for medical education", |
| "authors": [ |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Procter", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Heller", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal For Virtual Worlds Research", |
| "volume": "2", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douglas Danforth, Mike Procter, Richard Chen, Mary Johnson, and Robert Heller. 2009. Development of virtual patient simulations for medical education. Journal For Virtual Worlds Research, 2(2).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Virtual standardized patients can accurately assess information gathering skills in medical students", |
| "authors": [ |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Zimmerman", |
| "suffix": "" |
| }, |
| { |
| "first": "Kellen", |
| "middle": [], |
| "last": "Maicher", |
| "suffix": "" |
| }, |
| { |
| "first": "Holly", |
| "middle": [], |
| "last": "Cronau", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Ledford", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "Allison", |
| "middle": [], |
| "last": "Macerollo", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Liston", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the American Association of Medical Colleges", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douglas Danforth, Laura Zimmerman, Kellen Maicher, Holly Cronau, Cynthia Ledford, D. Post, Allison Macerollo, D. Way, and Beth Liston. 2016. Vir- tual standardized patients can accurately assess in- formation gathering skills in medical students. In Proceedings of the American Association of Medi- cal Colleges, Seattle, WA.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An evaluation of alternative strategies for implementing dialogue policies using statistical classification and rules", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Devault", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Leuski", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1341--1345", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David DeVault, Anton Leuski, and Kenji Sagae. 2011. An evaluation of alternative strategies for imple- menting dialogue policies using statistical classifi- cation and rules. In Proceedings of the 5th Interna- tional Joint Conference on Natural Language Pro- cessing (IJCNLP), pages 1341-1345.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Generating disambiguating paraphrases for structurally ambiguous sentences", |
| "authors": [ |
| { |
| "first": "Manjuan", |
| "middle": [], |
| "last": "Duan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ethan", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "White", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016", |
| "volume": "", |
| "issue": "", |
| "pages": "160--170", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manjuan Duan, Ethan Hill, and Michael White. 2016. Generating disambiguating paraphrases for struc- turally ambiguous sentences. In Proceedings of the 10th Linguistic Annotation Workshop held in con- junction with ACL 2016 (LAW-X 2016), pages 160- 170, Berlin, Germany. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Paraphrase-driven learning for open question answering", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1608--1618", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1608-1618. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Ppdb: The paraphrase database", |
| "authors": [ |
| { |
| "first": "Juri", |
| "middle": [], |
| "last": "Ganitkevitch", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "758--764", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 758-764.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Interpreting questions with a log-linear ranking model in a virtual patient dialogue system", |
| "authors": [ |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Jaffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "White", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Schuler", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Fosler-Lussier", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "86--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evan Jaffe, Michael White, William Schuler, Eric Fosler-Lussier, Alex Rosenfeld, and Douglas Dan- forth. 2015. Interpreting questions with a log-linear ranking model in a virtual patient dialogue system. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 86-96.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Combining CNNs and Pattern Matching for Question Interpretation in a Virtual Patient Dialogue System", |
| "authors": [ |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "White", |
| "suffix": "" |
| }, |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Jaffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Zimmerman", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "11--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lifeng Jin, Michael White, Evan Jaffe, Laura Zim- merman, and Douglas Danforth. 2017. Combining CNNs and Pattern Matching for Question Interpre- tation in a Virtual Patient Dialogue System. In Pro- ceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 11-21.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Learning to Remember Rare Events", |
| "authors": [ |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Ofir", |
| "middle": [], |
| "last": "Nachum", |
| "suffix": "" |
| }, |
| { |
| "first": "Aurko", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "Samy", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. 2017. Learning to Remember Rare Events. In Proceedings of the International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Convolutional Neural Networks for Sentence Classification", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1746--1751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 1746-1751.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Distributed representations of sentences and documents", |
| "authors": [ |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1188--1196", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Inter- national Conference on Machine Learning, pages 1188-1196.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Developing a conversational virtual standardized patient to enable students to practice history taking skills", |
| "authors": [ |
| { |
| "first": "Kellen", |
| "middle": [], |
| "last": "Maicher", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Danforth", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Price", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Zimmerman", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Wilcox", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Liston", |
| "suffix": "" |
| }, |
| { |
| "first": "Holly", |
| "middle": [], |
| "last": "Cronau", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurie", |
| "middle": [], |
| "last": "Belknap", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Ledford", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "Allison", |
| "middle": [], |
| "last": "Macerollo", |
| "suffix": "" |
| }, |
| { |
| "first": "Milisa", |
| "middle": [], |
| "last": "Rizer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Simulation in Healthcare", |
| "volume": "12", |
| "issue": "2", |
| "pages": "124--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kellen Maicher, Douglas Danforth, A. Price, Laura Zimmerman, B. Wilcox, Beth Liston, Holly Cronau, Laurie Belknap, Cynthia Ledford, D. Way, D. Post, Allison Macerollo, and Milisa Rizer. 2017. Devel- oping a conversational virtual standardized patient to enable students to practice history taking skills. Simulation in Healthcare, 12(2):124-131.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Paraphrasing revisited with neural machine translation", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Mallinson", |
| "suffix": "" |
| }, |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter", |
| "volume": "1", |
| "issue": "", |
| "pages": "881--893", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017. Paraphrasing revisited with neural ma- chine translation. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, volume 1, pages 881-893.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The english lexical substitution task. Language resources and evaluation", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "43", |
| "issue": "", |
| "pages": "139--159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diana McCarthy and Roberto Navigli. 2009. The en- glish lexical substitution task. Language resources and evaluation, 43(2):139-159.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Note on the sampling error of the difference between correlated proportions or percentages", |
| "authors": [ |
| { |
| "first": "Quinn", |
| "middle": [], |
| "last": "Mcnemar", |
| "suffix": "" |
| } |
| ], |
| "year": 1947, |
| "venue": "Psychometrika", |
| "volume": "12", |
| "issue": "2", |
| "pages": "153--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Efficient Estimation of Word Representations in Vector Space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Repre- sentations in Vector Space. In Proceedings of the International Conference on Learning Representa- tions, pages 1-12.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Wordnet: a lexical database for english", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Rectified Linear Units Improve Restricted Boltzmann Machines", |
| "authors": [ |
| { |
| "first": "Vinod", |
| "middle": [], |
| "last": "Nair", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 27th International Conference on Machine Learning", |
| "volume": "3", |
| "issue": "", |
| "pages": "807--814", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinod Nair and Geoffrey E Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Ma- chines. In Proceedings of the 27th International Conference on Machine Learning, 3, pages 807- 814.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "The Ohio Supercomputer Center", |
| "authors": [], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "The Ohio Supercomputer Center. 1987. Ohio Super- computer Center.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "English gigaword. Linguistic Data Consortium", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Parker", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Graff", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Ke", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuaki", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword. Linguis- tic Data Consortium.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Crowdsourcing multimodal dialog interactions: Lessons learned from the HALEF case", |
| "authors": [ |
| { |
| "first": "Vikram", |
| "middle": [], |
| "last": "Ramanarayanan", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Suendermann-Oeft", |
| "suffix": "" |
| }, |
| { |
| "first": "Hillary", |
| "middle": [], |
| "last": "Molloy", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Tsuprun", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Lange", |
| "suffix": "" |
| }, |
| { |
| "first": "Keelan", |
| "middle": [], |
| "last": "Evanini", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the AAAI-17 Workshop on Crowdsourcing, Deep Learning, and Artificial Intelligence Agents", |
| "volume": "", |
| "issue": "", |
| "pages": "423--431", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vikram Ramanarayanan, David Suendermann-Oeft, Hillary Molloy, Eugene Tsuprun, Patrick Lange, and Keelan Evanini. 2017. Crowdsourcing multi- modal dialog interactions: Lessons learned from the HALEF case. In Proceedings of the AAAI-17 Work- shop on Crowdsourcing, Deep Learning, and Artifi- cial Intelligence Agents, pages 423-431.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Statistical qa-classifier vs. re-ranker: What's the difference?", |
| "authors": [ |
| { |
| "first": "Deepak", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz Josef", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the ACL 2003 workshop on Multilingual summarization and question answering", |
| "volume": "12", |
| "issue": "", |
| "pages": "69--75", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deepak Ravichandran, Eduard Hovy, and Franz Josef Och. 2003. Statistical qa-classifier vs. re-ranker: What's the difference? In Proceedings of the ACL 2003 workshop on Multilingual summarization and question answering-Volume 12, pages 69-75. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A crowdsourcing method to develop virtual human conversational agents", |
| "authors": [ |
| { |
| "first": "Brent", |
| "middle": [], |
| "last": "Rossen", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Lok", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "International Journal of HCS", |
| "volume": "70", |
| "issue": "4", |
| "pages": "301--319", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brent Rossen and Benjamin Lok. 2012. A crowdsourc- ing method to develop virtual human conversational agents. International Journal of HCS, 70(4):301- 319.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Effective feature integration for automated short answer scoring", |
| "authors": [ |
| { |
| "first": "Keisuke", |
| "middle": [], |
| "last": "Sakaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Heilman", |
| "suffix": "" |
| }, |
| { |
| "first": "Nitin", |
| "middle": [], |
| "last": "Madnani", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1049--1054", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keisuke Sakaguchi, Michael Heilman, and Nitin Mad- nani. 2015. Effective feature integration for auto- mated short answer scoring. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1049-1054. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Edinburgh neural machine translation systems for wmt 16", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the First Conference on Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "371--376", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for wmt 16. In Proceedings of the First Conference on Machine Translation, pages 371- 376, Berlin, Germany. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "From rule-based to statistical grammars: Continuous improvement of largescale spoken dialog systems", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Suendermann-Oeft", |
| "suffix": "" |
| }, |
| { |
| "first": "Keelan", |
| "middle": [], |
| "last": "Evanini", |
| "suffix": "" |
| }, |
| { |
| "first": "Jackson", |
| "middle": [], |
| "last": "Liscombe", |
| "suffix": "" |
| }, |
| { |
| "first": "Phillip", |
| "middle": [], |
| "last": "Hunter", |
| "suffix": "" |
| }, |
| { |
| "first": "Krishna", |
| "middle": [], |
| "last": "Dayanidhi", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Pieraccini", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "4713--4716", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Suendermann-Oeft, Keelan Evanini, Jackson Liscombe, Phillip Hunter, Krishna Dayanidhi, and Roberto Pieraccini. 2009. From rule-based to statis- tical grammars: Continuous improvement of large- scale spoken dialog systems. pages 4713-4716.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Matching Networks for One Shot Learning", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Blundell", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Lillcrap", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Daan", |
| "middle": [], |
| "last": "Wierstra", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "817--825", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals, Charles Blundell, Timothy Lillcrap, Ko- ray Kavukcuoglu, and Daan Wierstra. 2016. Match- ing Networks for One Shot Learning. In Pro- ceedings of Neural Information Processing Systems, pages 817-825.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "ADADELTA: An Adaptive Learning Rate Method", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "D" |
| ], |
| "last": "Zeiler", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. CoRR.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Virtual Patient Dialogue System a 10% absolute improvement in accuracy on the quintile of least frequently asked questions.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Label frequency distribution is extremely long-tailed, with few frequent labels and many infrequent labels. Values are shown above quintile boundaries.tail of relevant but infrequently asked questions.", |
| "num": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "html": null, |
| "text": ". i am so glad to see you. can you tell me a little about your issue <None> i'm sorry, i don't understand that question. would you restate it? what brings you in today what brings you in today i was hoping you could help me with my back pain, it really hurts! it has been awful.", |
| "content": "<table><tr><td>Student question hello mr. wilkins</td><td>Label detected hello mr</td><td>Canned response hello doctor</td></tr></table>", |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "html": null, |
| "text": "Test results for the stacked CNN ensemble", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: Test results for the stacked CNN ensem-ble and the memory-augmented CNN classifier (MA-CNN) with the manually filtered paraphrases. The gain brought by the adding the automatically generated paraphrases into training data for MA-CNN is highly significant (p = 1.6 \u00d7 10 \u22124 , McNemar's test).</td></tr></table>", |
| "num": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "html": null, |
| "text": "Test results for the memory-augmented CNN classifier (MA-CNN) with different filtering techniques.", |
| "content": "<table><tr><td>Paraphrases</td><td>Rare Acc</td></tr><tr><td>No paraphrases</td><td>51.78</td></tr><tr><td>Lexical substitution</td><td>53.16</td></tr><tr><td>Neural Machine Translation</td><td>55.22</td></tr><tr><td>Both</td><td>56.14</td></tr></table>", |
| "num": null |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: Test results for the memory-augmented CNN classifier (MA-CNN) with different subsets of the man-ual filtered paraphrases generated using different para-phrase methods.</td></tr></table>", |
| "num": null |
| } |
| } |
| } |
| } |