| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T06:35:00.556710Z" |
| }, |
| "title": "Quantifying the Evaluation of Heuristic Methods for Textual Data Augmentation", |
| "authors": [ |
| { |
| "first": "Omid", |
| "middle": [], |
| "last": "Kashefi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pittsburgh", |
| "location": {} |
| }, |
| "email": "kashefi@cs.pitt.edu" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pittsburgh", |
| "location": {} |
| }, |
| "email": "hwa@cs.pitt.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Data augmentation has been shown to be effective in providing more training data for machine learning and resulting in more robust classifiers. However, for some problems, there may be multiple augmentation heuristics, and the choices of which one to use may significantly impact the success of the training. In this work, we propose a metric for evaluating augmentation heuristics; specifically, we quantify the extent to which an example is \"hard to distinguish\" by considering the difference between the distribution of the augmented samples of different classes. Experimenting with multiple heuristics in two prediction tasks (positive/negative sentiment and verbosity/conciseness) validates our claims by revealing the connection between the distribution difference of different classes and the classification accuracy.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Data augmentation has been shown to be effective in providing more training data for machine learning and resulting in more robust classifiers. However, for some problems, there may be multiple augmentation heuristics, and the choices of which one to use may significantly impact the success of the training. In this work, we propose a metric for evaluating augmentation heuristics; specifically, we quantify the extent to which an example is \"hard to distinguish\" by considering the difference between the distribution of the augmented samples of different classes. Experimenting with multiple heuristics in two prediction tasks (positive/negative sentiment and verbosity/conciseness) validates our claims by revealing the connection between the distribution difference of different classes and the classification accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Machine learning approaches have been shown to be capable of making accurate predictions in many well-known problem domains with an abundance of training data. This heavy reliance on the availability of the data, however, may hamper the application of machine learning approaches to resourcelimited problem domains, where a sizable training data are not always available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There is a growing body of research on training under resource scarcity, and data augmentation is one of such techniques. It aims to reconcile the data requirement of the machine learning approaches by applying a general (e.g., randomly remove a word) or domain-inspired heuristic (e.g., replace an adjective with an antonym) to the (limited) existing data in order to generate more training samples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One challenge for data augmentation is in choosing the most appropriate heuristic for the application in question. There may be many domainindependent augmentation heuristics, and a domain expert may come up with many different domaininspired heuristics; but the choices of which examples from these heuristics to use may have a significant impact on the success of the trained model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A straightforward approach to choose an augmentation heuristic is to actually perform the classification experiment on all possible augmented datasets and then chose the best performing one(s) based on the evaluative results. However, this approach may not be computationally practical when there are too many augmentation heuristic options.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose an alternative heuristic evaluation approach based on the idea that a good heuristic should aim to generate \"hard to distinguish\" samples for different classes. We further argue that the generation quality of \"hard to distinguish\" examples could be quantified as the difference between the distribution of the augmented samples that a heuristic generates for different classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To calculate the distribution difference, we proposed to use pre-trained off-the-shelf embeddings to convert sentences into class distributions, then calculate the KL-divergence between them and used that as a metric to evaluate the \"hard to distinguish\" examples that a heuristic produces.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We validate our proposed heuristic evaluation approach by experimenting with multiple heuristics and augmented datasets for two classification tasks: predicting whether a sentence expresses positive or negative sentiment and predicting whether a sentence is verbose or concise. Results suggest that quantifying the \"hard to distinguish\" example generation quality of the heuristics as the difference between class distribution of the augmented examples, could be served as an effective metric for choosing a suitable augmented dataset for a classification task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Data augmentation is a technique for generating additional training data by applying a heuristic transformation to the existing training examples. For example, an existing image could by rescaled or flipped to get more images with the same label to expand the size and diversity of the training dataset and thus train a more reliable and accurate model (Fr\u00e9nay and Verleysen, 2014; Hendrycks et al., 2018; Shorten and Khoshgoftaar, 2019) .", |
| "cite_spans": [ |
| { |
| "start": 353, |
| "end": 381, |
| "text": "(Fr\u00e9nay and Verleysen, 2014;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 382, |
| "end": 405, |
| "text": "Hendrycks et al., 2018;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 406, |
| "end": 437, |
| "text": "Shorten and Khoshgoftaar, 2019)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In general, data augmentation could be formulated as Equation 1, where h is a heuristic function that transforms the datapoint and label pair of (x, y) to a new augmented sample (x,\u0177).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(x,\u0177) = h(x, y)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The majority of existing data augmentation approaches are label-preserving, which relaxes the Equation 1 as (x, y) = h(x, y) = (h(x), y); this means, if x belong to some class A, augmented x also belong to class A. For example, using a synonym replacement heuristic, a sentence with positive sentiment could be augmented into a new example, while preserving the overall positive sentiment. Label-preserving data augmentation requires existing labeled samples for every class that is needed to be augmented.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Data augmentation can be non-label-preserving as well, where the label itself might also transform using function h y that expands Equation 1 as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(x,\u0177) = h(x, y) = (h x (x), h y (y))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This means, while x belongs to class A,x might not necessarily belong to class A. For example, by replacing the most positive word(s) of a sentence with positive sentiment with an antonym, the sentence's sentiment may become negative. Non-labelpreserving data augmentation is not bound to the assumption of having labeled samples for instances of all classes and samples from one class may be enough to generate instances of other classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a classification task, there may be multiple heuristics and data augmentation approaches that allow us to transform existing samples to new ones, but the choice of heuristic may significantly impact the success of the task. In this paper, we aim to answer the key question: \"which heuristic and data augmentation approach is more appropriate for a classification task?\" In Section 3, we propose a low-cost approach to quantify the evaluation of different heuristics and the resulting augmented datasets for classification tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We believe our proposed approach could be a contribution to the NLP community because data augmentation has been shown to be useful for many NLP applications, with researchers proposing many different approaches for text data augmentation; for example, (Zhang et al., 2015; Wei and Zou, 2019) used thesaurus-based and (Wang and Yang, 2015; Kobayashi, 2018; Jiao et al., 2019) used embedding-based lexical substitution approach, (Wei and Zou, 2019; Xie et al., 2019) used random noise injection, including random word insertion, deletion, or sentence shuffling, (Luque, 2019) used instance crossover by combining halves of tweets, (Guo et al., 2019) adapt the mixup approach (Zhang et al., 2018) to text by interpolating the distributed representation of different sentences, (Sennrich et al., 2016; Fadaee et al., 2017; Xie et al., 2019) used back-translation, and (Hu et al., 2017; Iyyer et al., 2018; Anaby-Tavor et al., 2020; Kumar et al., 2020) used (deep) generative models to augment more training examples.", |
| "cite_spans": [ |
| { |
| "start": 253, |
| "end": 273, |
| "text": "(Zhang et al., 2015;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 274, |
| "end": 292, |
| "text": "Wei and Zou, 2019)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 340, |
| "end": 356, |
| "text": "Kobayashi, 2018;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 357, |
| "end": 375, |
| "text": "Jiao et al., 2019)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 428, |
| "end": 447, |
| "text": "(Wei and Zou, 2019;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 448, |
| "end": 465, |
| "text": "Xie et al., 2019)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 630, |
| "end": 648, |
| "text": "(Guo et al., 2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 674, |
| "end": 694, |
| "text": "(Zhang et al., 2018)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 775, |
| "end": 798, |
| "text": "(Sennrich et al., 2016;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 799, |
| "end": 819, |
| "text": "Fadaee et al., 2017;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 820, |
| "end": 837, |
| "text": "Xie et al., 2019)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 865, |
| "end": 882, |
| "text": "(Hu et al., 2017;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 883, |
| "end": 902, |
| "text": "Iyyer et al., 2018;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 903, |
| "end": 928, |
| "text": "Anaby-Tavor et al., 2020;", |
| "ref_id": null |
| }, |
| { |
| "start": 929, |
| "end": 948, |
| "text": "Kumar et al., 2020)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "However, with all these textual augmentation options, trying all of them for a (classifier) training task might be impractical, and to our best knowledge, there is not a guideline for how to choose between them for a task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A straightforward approach to assess which heuristic and data augmentation approach is more appropriate for the task is to try every heuristic to generate an augmented dataset, then train a classifier on each and check the final classification performance (Qiu et al., 2020; Wei and Zou, 2019) . The training process in this brute-force approach, however, may be time-consuming and resource-intensive, especially in complex training scenarios.", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 274, |
| "text": "(Qiu et al., 2020;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 275, |
| "end": 293, |
| "text": "Wei and Zou, 2019)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Alternatively, we may try to identify qualities that make a heuristic effective. Intuitively, a good heuristic ought to generate augmented samples that are the most similar to the original data distribution. However, this approach may overlook the additional generalization benefit that may come from diverse augmented training examples. Moreover, this approach may not be possible for problem domains with limited resources, where original labeled data is not available for all classes, and one may have to use non-label-preserving heuristics to augment examples for all classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "On the other hand, from the classification task perspective, a good heuristic should aim to generate near-miss examples (samples of class B hard to distinguish from A). We believe, the \"hard to distinguish\" samples can be quantified by finding a way to compute the difference between the samples of different classes, to sever as an guideline for choosing between different heuristic approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let us assume samples of class A are drawn from distribution A, which should be different from distribution B that samples of class B are drawn from. The difference between distribution A and B can be calculated as the KL-divergence (KLD) (Kullback and Leibler, 1951 ) from B to A as: D KL (A||B). KLD calculates how probability distribution A is different from the reference probability distribution B as the amount of information gained if samples of B are used instead of samples of A.", |
| "cite_spans": [ |
| { |
| "start": 239, |
| "end": 266, |
| "text": "(Kullback and Leibler, 1951", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Thus, a lower D KL (A||B) means distribution A is more similar to distribution B, so samples of class A are harder to distinguish from samples of class B. Therefore, the extent to which \"hard to distinguish\" samples can be generated by heuristic h could be quantified as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "D KL (A h ||B h ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where A h and B h indicate the samples of class A and B augmented using heuristic h, and Equation 2 could be used to identify which heuristic is generating \"harder to distinguish\" samples and so more suitable for the classification task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "arg min h D KL (A h ||B h )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Finally, to transform sentences from their discrete word representation into a continuous distribution representation, we utilize a few of the numerous pre-trained embeddings that nowadays are the de facto approach for encoding sentences into vector space (Cho et al., 2014; Le and Mikolov, 2014; Cer et al., 2018; Devlin et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 274, |
| "text": "(Cho et al., 2014;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 275, |
| "end": 296, |
| "text": "Le and Mikolov, 2014;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 297, |
| "end": 314, |
| "text": "Cer et al., 2018;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 315, |
| "end": 335, |
| "text": "Devlin et al., 2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We examine the applicability of our proposed approach by studying two classification tasks: sentiment analysis, as a resource-rich problem domain that allows experimenting with both labelpreserving and non-label-preserving heuristics, and verbosity analysis, as a resource-limited problem domain that the absence of sizable labeled data limits the options to non-label-preserving heuristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantification of Heuristics Suitability", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this section, we go over some heuristic options for augmenting training corpora for sentiment analysis and verbosity detection domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Datasets", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our sentiment analysis task is to predict whether a sentence expresses positive, negative, or neutral sentiment? For this task, we use the sentences from the Yelp Polarity Dataset (YPD) (Zhang et al., 2015) to create the augmented dataset.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 206, |
| "text": "(Zhang et al., 2015)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As label-preserving heuristics, we use following heuristics proposed by Wei and Zou (2019):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Synonym Replacement (SR). Randomly pick a content word from the sentence and replace it with a synonym chosen at random.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Random Insertion (RI). Randomly choose a content word from the sentence and insert one of its synonyms to a random place in the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Random Swap (RS). Swap the position of two randomly chosen words in the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Random Deletion (RD). Delete a randomly chosen word from the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We apply these heuristics to the positive sentences of YPD to generate more positive examples, and the other way around for generating more negative examples. For each sentence, we repeat each heuristic operation until about 20% of its words are changed (\u03b1 = .2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Moreover, we propose the following non-labelpreserving heuristics and apply them to the positive sentences to create the augmented negative examples, and the other way around for generating the augmented positive examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 ALL. In this heuristic, we replace all the sentiment words of a sentence with their antonyms. To find the sentiment words, we first collected a vocabulary of positive and negative unigrams by combing the labeled words of Stanford Sentiment Treebank (Socher et al., 2013) and the Opinion Lexicon (Hu and Liu, 2004) . This results in a vocabulary of 3,453 positive and 6,000 negative unigrams.", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 272, |
| "text": "(Socher et al., 2013)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 297, |
| "end": 315, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Then, for a positive sentence in YPD, we replace every word of it that appeared in the positive portion of the collected vocabulary by one of its randomly chosen antonyms, using WordNet (Miller, 1995) , to create the augmented negative sentence. We perform similarly but in the opposite direction to create the augmented positive sentences. \u2022 ONE. In this heuristic, instead of replacing all sentiment words with their randomly chosen antonym, we first filtered for antonyms that match the POS and sense of the sentiment word, then we pick the antonym that makes the most fluent augmented sentences, ranked by a language model (LM) trained on YPD. Finally, for every sentence, we only replace one of its sentiment words with its POS, sense, and LM filtered antonym.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 200, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Using this heuristic, for example, a sentence with overall positive polarity may still contain a word that expresses a negative opinion about an aspect, so intuitively, this creates \"harder to distinguish\" examples compared to the ALL heuristic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In total, we generated 50K positive and 50K negative augmented samples using each heuristic. We removed all of the original YPD sentences so that these datasets contain only augmented samples. We refer to each dataset with the same name as the heuristic function it is augmented with. Table 1 shows examples of sentences augmented using the label-preserving and non-label-preserving sentiment heuristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Sentiment Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The verbosity detection task is to predict whether a sentence is verbose or concise. Unlike the sentiment analysis domain, the set of existing resources for the verbosity detection problem is much more limited: NUCLE covers grammatical redundancy (Dahlmeier et al., 2013) , and Kashefi et al. (2018) has a small corpus called Semantic Pleonasm Corpus (SPC) that contains semantic redundancy (i.e., verbosity) labels. Due to its small size, it is primarily suitable as a benchmark.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 271, |
| "text": "(Dahlmeier et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 278, |
| "end": 299, |
| "text": "Kashefi et al. (2018)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Since to the best of our knowledge, there is no sizable resource with explicit verbose and concise labels, to augment a dataset of concise and verbose sentences, we start by trying to identify an existing real-world data source that has verbosity or conciseness characteristics. One domain-specific feature of Yelp that we exploit is the data category called \"tips.\" Since \"tips\" are very short sentences, they are likely to be concise; we sample for \"tips\" that contain adjectives because the evaluation corpus (i.e. SPC) mainly focuses on adjectival semantic redundancies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Based on domain knowledge, we come up with the following non-label-preserving heuristics to create verbose samples based on the collected \"concise\" sentences by adding a superfluous adjective to the concise sentences:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Duplicate (DUP). This heuristic is an obvious case for word redundancy by duplicating an adjective word of the sentence right next to itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Synonym (SYN). This heuristic inserts a synonym next to an adjective word of the sentence. The conventional way to get synonyms of a word is to use WordNet, however, since these synonyms may express a different quality of the noun clause compared to the original adjective, augmented construction might not be semantically redundant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For this reason, we opt to use sense2vec (Trask et al., 2015) , a contextual wordembedding fine-tuned on Yelp \"tips\". Since the adjective synonyms from sense2vec are matching the context and follow the same intent and emotional state of the original adjective, these two adjacent synonyms are likely to make a pleonastic construction.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 61, |
| "text": "(Trask et al., 2015)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Near-Miss Negative (NMN). In this heuristic, we try to create concise examples that are \"hard to distinguish\" from the verbose examples. We trained a language model on the Yelp \"tips\" and used that to predict the most likely words that can occur right after an adjective of the sentence. Let assume for adjective w adj in sentence s, using LM, we retrieved {w aug1 , w aug2 , ..., w aug5 } as a sorted list of most likely words that can appear next to w adj given its context s. We then filter for w aug s that are adjective themselves and a synonym of w adj , lets assume the filtered list be {w aug2 , w aug5 }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Since LM is trained on Yelp, the w aug2 is already observed in the Yelp tips after the w adj in some context. Taking into account that Yelp tips are considered concise, the sequence of ... w adj w aug2 ... is also concise. Therefore, we can create concise examples that are containing two adjacent synonyms but are not verbose", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For each heuristic, we generate only one augmented verbose sample from an original concise sentence. In total, we augmented 100K concise and 100K verbose samples using each heuristic. Since the verbose examples are generated from concise sentences that are included in the augmented corpus, we removed the concise sentences with odd and verbose sentences with even indexes to make sure that non of the concise are verbose sentences in the corpus are corresponding to each other. The final augmented corpus, thus, contains 50K nonparallel samples of each class. We refer to each dataset with the same name as the heuristic function that was used to augment it. Table 2 shows examples of sentences augmented using the non-label-preserving verbosity heuristics. While duplicating the word \"delicious\" or adding \"tasty\" next to it makes the sentence verbose, adding \"redolent\" does not make it verbose because \"redolent\" and \"delicious\" are describing different quality of the \"bread.\"", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 660, |
| "end": 667, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Augmented Verbosity Corpus", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The key questions for validating our proposed approach for quantifying the evaluation of heuristic textual data augmentation methods are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Q1. Can generating \"hard to distinguish\" examples be an effective way to assess whether a heuristic is generating a suitable augmented training dataset?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Q2. To what extent could the notion of \"hard to distinguish\" examples be quantified by our proposed metric -the difference between the class distribution of the augmented samples?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Q3. Is calculating the difference of class distributions computationally efficient in practice?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To measure the accuracy of sentiment and verbosity classification in answering Q1, we trained an LSTM (Liu et al., 2016) and a CNN (Kim, 2014) classifier on each the augmented dataset. The classification result for each task and augmented dataset is reported in Section 5.2.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 120, |
| "text": "(Liu et al., 2016)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 131, |
| "end": 142, |
| "text": "(Kim, 2014)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The LSTM and CNN models are trained on augmented corpora separately for each task; the sentiment classifiers are evaluated on a held-out portion of the YPD, and the verbosity classifiers are evaluated on SPC. None of the sentences of the held-out YPD and SPC are used during the creation of the augmented datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To answer Q2, we use two pre-trained encoder models: Universal Sentence Encoder (USE) (Cer et al., 2018) and Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) , both of which are transformer-based encoder of greater-than-word length text, to transform the sentences into a continuous space so that we can treat them as class distributions and measure their similarity.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 104, |
| "text": "(Cer et al., 2018)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 172, |
| "end": 193, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
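The encoding step above maps each sentence to a fixed-length vector so that a class of augmented sentences can be treated as a distribution. The paper uses the pre-trained USE and BERT encoders; as a dependency-free sketch of the same interface, the `toy_encode` function below is a hypothetical, deterministic stand-in (hash-seeded random word vectors averaged per sentence), not either of those models:

```python
import zlib

import numpy as np


def toy_encode(sentences, dim=64):
    """Deterministic stand-in for a pre-trained sentence encoder.

    Each token gets a pseudo-random vector seeded from a CRC32 hash of its
    surface form; a sentence embedding is the mean of its token vectors.
    Illustrative only -- the paper uses USE and BERT here instead.
    """
    cache = {}
    out = np.zeros((len(sentences), dim))
    for i, sent in enumerate(sentences):
        toks = sent.lower().split()
        for tok in toks:
            if tok not in cache:
                rng = np.random.default_rng(zlib.crc32(tok.encode("utf-8")))
                cache[tok] = rng.standard_normal(dim)
        if toks:
            out[i] = np.mean([cache[tok] for tok in toks], axis=0)
    return out


emb = toy_encode(["the bread was delicious", "the bread was not tasty"])
print(emb.shape)  # (2, 64)
```

Any encoder with this shape contract (a list of sentences in, an `(n, dim)` array out) can be dropped into the divergence computation that follows.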
| { |
| "text": "If a good heuristic is the one that generates \"hard to distinguish\" examples, the dataset augmented using ONE should train a better classifier than ALL for the sentiment analysis task, and the verbosity classifier trained on NMN should outperform the classifiers trained on SYN and DUP. Table 3 and Table 4 show the classification accuracy of the neural models trained on different augmented datasets for sentiment and verbosity prediction tasks, and as we expected, heuristics that intuitively generate \"harder to distinguish\" examples are more suitable for the prediction task and trained a better classifier on both tasks: Sentiment Classification Accuracy: These observations suggest that an augmented dataset generated from a heuristic that produces \"harder to distinguish\" examples for different classes could train a better classifier (Q1).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 287, |
| "end": 294, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 299, |
| "end": 306, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Classification Accuracy", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "ACC(ON E) > ACC(ALL) Verbosity Classification Accuracy: ACC(N M N ) > ACC(SY N ) > ACC(DU P ) Dataset Model", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Accuracy", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Since label-preserving heuristics do not change the class label of the samples, the extent to which \"hard to distinguish\" examples can be generated rely heavily on their existence in the original data. Thus, we cannot intuitively predict which labelpreserving heuristic might be a better choice, however, in Section 5.2, we further study whether our purposed heuristic evaluation approach is applicable to label-preserving heuristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Accuracy", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To investigate the extent to which \"hard to distinguish\" examples might be quantified as a difference between the distribution of the augmented samples of different classes, we first encode the augmented sentences into a continuous high dimensional vector space; then, we computed the difference be-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Augmented Distribution Difference", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Model ACC KLD tween the distribution of the augmented samples of different classes as the divergence from high dimensional representation of one class to another. For the sentiment analysis task, we computed the difference between augmented positive and negative distribution as follow, where E is either BERT or USE encoders, and positive and negative indicate augmented positive and negative examples respectively:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "D KL (E(positive)||E(negative))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "The distribution difference for the verbosity analysis task is calculated as follow, where concise and verbose indicate augmented concise and verbose examples respectively:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "D KL (E(concise)||E(verbose))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
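The paper leaves the exact density estimate behind D_KL unspecified. One common closed-form choice, shown here purely as an assumption, is to fit a diagonal Gaussian to each class's embeddings and use the analytic Gaussian KL divergence; `gaussian_kl` is a hypothetical helper name, not the authors' implementation:

```python
import numpy as np


def gaussian_kl(x, y, eps=1e-6):
    """KL divergence between diagonal Gaussians fit to two embedding sets.

    x, y: (n_samples, dim) arrays of encoded sentences for the two classes
    (e.g. E(positive) and E(negative)). Fitting diagonal Gaussians is one
    possible way to turn sample sets into distributions; the paper does not
    commit to a specific estimator.
    """
    mu_x, var_x = x.mean(0), x.var(0) + eps
    mu_y, var_y = y.mean(0), y.var(0) + eps
    return 0.5 * np.sum(
        np.log(var_y / var_x) + (var_x + (mu_x - mu_y) ** 2) / var_y - 1.0
    )


rng = np.random.default_rng(0)
pos = rng.normal(0.0, 1.0, size=(500, 16))  # stand-in for E(positive)
neg = rng.normal(0.5, 1.0, size=(500, 16))  # stand-in for E(negative)
far = rng.normal(3.0, 1.0, size=(500, 16))  # a well-separated class

# Closer (harder-to-distinguish) classes give a smaller divergence.
print(gaussian_kl(pos, neg), gaussian_kl(pos, far))
```

The ordering is what the proposed metric relies on: the heuristic whose augmented classes have the smaller divergence is the one producing "harder to distinguish" examples.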
| { |
| "text": "It must be noted that since there is no correspondence between the augmented examples of different classes, we computed the difference as the average KL-Divergence over mini-batches of the size 64 samples from the shuffled augmented dataset for 10 epochs (the same batch and epoch values used for training LSTM and CNN models). Table 3 shows the distribution difference between augmented positive and negative samples for the sentiment analysis task. As shown, although the average classification accuracy of models trained on label-preserving heuristics are only marginally different, the divergence between distributions of augmented examples with positive and negative sentiments are following the reverse order for both BERT and USE representations, with one exception for USE representation of RI compared to RD: Since the non-label-preserving heuristics apply significant semantic changes to the original samples to change its class label, it is expected that the choice of heuristic should have a more noticeable impact on the classification accuracy compared to the augmentation using label-preserving heuristics. We also observe the same results for non-label-preserving heuristics: augmented dataset with higher classification accuracy has lower divergence between distributions of their positive and negative examples: Table 4 shows the distribution difference between augmented concise and verbose samples for the verbosity prediction task. Here, similar to the sentiment analysis task, we observe that the divergence between distributions of augmented concise and verbose examples are following the reverse order of classification accuracy for both BERT and USE representations: Overall: 76.4s AVG: 1825.5s These observations may indicate that the extent to which a heuristic might generate \"hard to distinguish\" examples could be quantified as the difference (divergence) between the distribution of augmented examples in different classes (Q2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 328, |
| "end": 335, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 1330, |
| "end": 1337, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
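Because the augmented examples of the two classes are unpaired, the divergence is averaged over shuffled mini-batches (size 64, for 10 epochs), as described above. A sketch of that averaging loop, assuming a diagonal-Gaussian batch divergence (`batch_kl` and `avg_minibatch_kl` are hypothetical names, and the Gaussian fit is one possible choice the paper does not specify):

```python
import numpy as np


def batch_kl(x, y, eps=1e-6):
    # Diagonal-Gaussian KL between two batches of embeddings (an assumption;
    # the paper only states that KL-divergence is computed per mini-batch).
    mu_x, var_x = x.mean(0), x.var(0) + eps
    mu_y, var_y = y.mean(0), y.var(0) + eps
    return 0.5 * np.sum(
        np.log(var_y / var_x) + (var_x + (mu_x - mu_y) ** 2) / var_y - 1.0
    )


def avg_minibatch_kl(class_a, class_b, batch=64, epochs=10, seed=0):
    """Average KL over shuffled mini-batches, mirroring the paper's setup
    (batch size 64, 10 epochs) for unpaired augmented examples."""
    rng = np.random.default_rng(seed)
    kls = []
    for _ in range(epochs):
        ia = rng.permutation(len(class_a))
        ib = rng.permutation(len(class_b))
        n_batches = min(len(class_a), len(class_b)) // batch
        for k in range(n_batches):
            xa = class_a[ia[k * batch:(k + 1) * batch]]
            xb = class_b[ib[k * batch:(k + 1) * batch]]
            kls.append(batch_kl(xa, xb))
    return float(np.mean(kls))


rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, (640, 8))  # stand-in embeddings, class A
b = rng.normal(1.0, 1.0, (640, 8))  # stand-in embeddings, class B
print(avg_minibatch_kl(a, b), avg_minibatch_kl(a, a))
```

Shuffling each epoch means every batch pairing is random, so the average approximates the divergence between the two unpaired sample populations rather than any particular alignment.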
| { |
| "text": "Now that we have investigated the role of \"hard to distinguish\" examples in the success of training a classifier (Q1) and how to quantify that (Q2), it is time to evaluate the computational efficiency of our purposed approach to see how practical it is compared to training a separate classifier for each augmented dataset and pick the best performing one(s) (Q3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Efficiency", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To investigate this, we calculated the time for encoding the augmented examples into continuous space and the time requires for computing the KLD and compared them with the time required for training a classifier on an augmented dataset. Table 5 shows the average execution time of our proposed approach for evaluating the suitability of different data augmentation heuristics and training neural classifiers on augmented datasets. Reported numbers are averaged over sentiment and verbosity prediction tasks for all augmented datasets. Encoding is a one-time process for each augmented dataset, and numbers reported under KLD and Classification columns are the overall execution time after 10 epochs of training on an NVIDIA Tesla P100 GPU.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 238, |
| "end": 245, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Computational Efficiency", |
| "sec_num": "5.3" |
| }, |
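A minimal harness for this kind of timing comparison might look as follows. The workloads are synthetic stand-ins (the real comparison times USE/BERT encoding, KLD computation, and LSTM/CNN training on a GPU), so the functions and numbers here are illustrative only:

```python
import time

import numpy as np


def timed(fn, *args):
    """Return (result, elapsed seconds) for one call to fn."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0


rng = np.random.default_rng(0)
emb_a = rng.normal(0.0, 1.0, (5000, 64))  # stand-in for encoded class A
emb_b = rng.normal(1.0, 1.0, (5000, 64))  # stand-in for encoded class B


def kld_eval(a, b):
    # One diagonal-Gaussian KL pass (stand-in for the paper's KLD step).
    mu_a, va = a.mean(0), a.var(0) + 1e-6
    mu_b, vb = b.mean(0), b.var(0) + 1e-6
    return 0.5 * np.sum(np.log(vb / va) + (va + (mu_a - mu_b) ** 2) / vb - 1.0)


def dummy_train(a, b, epochs=10):
    # Stand-in for classifier training: repeated full passes over the data.
    w = np.zeros(a.shape[1])
    for _ in range(epochs):
        for x, y in ((a, 1.0), (b, -1.0)):
            w = w + 1e-3 * y * (x @ w + 1.0).mean() * x.mean(0)
    return w


kld_val, t_kld = timed(kld_eval, emb_a, emb_b)
_, t_train = timed(dummy_train, emb_a, emb_b)
print(f"KLD eval: {t_kld:.4f}s, training stand-in: {t_train:.4f}s")
```

Since encoding is done once per dataset, only the divergence step is repeated per heuristic, which is where the reported ~25x saving over retraining comes from.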
| { |
| "text": "We observed that encoding and divergence calculation times only depend on the number of samples and the classification task and choice of heuristic is not affecting the execution times. We also observed that the training time for both LSTM and CNN also highly depends on the number of training samples, and changing tasks and augmented dataset only slightly change the training time (standard deviation of 9.4s and 6.8s, respectively).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Efficiency", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Execution times are showing that our proposed heuristic evaluation approach is about 25 times faster than training a classifier; this may suggest that our proposed approach could be a low-cost alternative solution for assessing the suitability of the heuristic strategies for augmenting training dataset for different classification tasks, especially for complex training scenarios when training many classifiers on different augmented dataset might not be computationally practical (Q3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Efficiency", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "This paper presents an approach for evaluating the suitability of augmentation heuristics for classifications task via \"hard to distinguish\" example generation capacity of the heuristics through analyzing the difference of class distribution of the augmented examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Experimental results suggest our proposed heuristic evaluation approach could be a low-cost yet effective way of measuring the suitability of an augmented heuristic for a classification task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers for their helpful comments. This material is based upon work supported by the National Science Foundation under Grant Number 1735752.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Naama Tepper, and Naama Zwerdling. 2020. Not Enough Data? Deep Learning to the Rescue! In AAAI", |
| "authors": [ |
| { |
| "first": "Ateret", |
| "middle": [], |
| "last": "Anaby-Tavor", |
| "suffix": "" |
| }, |
| { |
| "first": "Boaz", |
| "middle": [], |
| "last": "Carmeli", |
| "suffix": "" |
| }, |
| { |
| "first": "Esther", |
| "middle": [], |
| "last": "Goldbraich", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Kantor", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Kour", |
| "suffix": "" |
| }, |
| { |
| "first": "Segev", |
| "middle": [], |
| "last": "Shlomov", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Not Enough Data? Deep Learning to the Rescue! In AAAI.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil Google Research Mountain View", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinfei", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sheng-Yi", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Hua", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicole", |
| "middle": [], |
| "last": "Limtiaco", |
| "suffix": "" |
| }, |
| { |
| "first": "Rhomni", |
| "middle": [], |
| "last": "St John", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "Mario", |
| "middle": [], |
| "last": "Guajardo-C\u00e9spedes", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1803.11175" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil Google Research Mountain View. 2018. Universal Sentence Encoder. Computing Research Repository, arXiv:1803.11175.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merri\u00ebnboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing Phrase Representations using RNN Encoder- Decoder for Statistical Machine Translation. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Dahlmeier", |
| "suffix": "" |
| }, |
| { |
| "first": "Siew Mei", |
| "middle": [], |
| "last": "Hwee Tou Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "SIGEDU", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner En- glish. In SIGEDU.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In NAACL.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Data Augmentation for Low-Resource Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Marzieh", |
| "middle": [], |
| "last": "Fadaee", |
| "suffix": "" |
| }, |
| { |
| "first": "Arianna", |
| "middle": [], |
| "last": "Bisazza", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data Augmentation for Low-Resource Neural Machine Translation. In NAACL.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Classification in the Presence of Label Noise: a Survey", |
| "authors": [ |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Fr\u00e9nay", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Verleysen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "IEEE Transactions on Neural Networks and Learning Systems", |
| "volume": "25", |
| "issue": "5", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beno\u00eet Fr\u00e9nay and Michel Verleysen. 2014. Classifica- tion in the Presence of Label Noise: a Survey. IEEE Transactions on Neural Networks and Learning Sys- tems, 25(5):845.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Augmenting Data with Mixup for Sentence Classification: An Empirical Study", |
| "authors": [ |
| { |
| "first": "Hongyu", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yongyi", |
| "middle": [], |
| "last": "Mao", |
| "suffix": "" |
| }, |
| { |
| "first": "Richong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Computing Research Repository", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1905.08941" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019. Augmenting Data with Mixup for Sentence Classifi- cation: An Empirical Study. Computing Research Repository, arXiv:1905.08941.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Hendrycks", |
| "suffix": "" |
| }, |
| { |
| "first": "Mantas", |
| "middle": [], |
| "last": "Mazeika", |
| "suffix": "" |
| }, |
| { |
| "first": "Duncan", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. 2018. Using Trusted Data to Train Deep Networks on Labels Corrupted by Se- vere Noise. In NIPS.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Mining and Summarizing Customer Reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "KDD", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "https://dl.acm.org/doi/10.1145/1014052.1014073" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and Summa- rizing Customer Reviews. In KDD.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Toward Controlled Generation of Text", |
| "authors": [ |
| { |
| "first": "Zhiting", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zichao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodan", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "P" |
| ], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward Con- trolled Generation of Text. In ICML.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Adversarial Example Generation with Syntactically Controlled Paraphrase Networks", |
| "authors": [ |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Wieting", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial Example Genera- tion with Syntactically Controlled Paraphrase Net- works. In NAACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "TinyBERT: Distilling BERT for Natural Language Understanding. Computing Research Repository", |
| "authors": [ |
| { |
| "first": "Xiaoqi", |
| "middle": [], |
| "last": "Jiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yichun", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Linlin", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Fang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1909.10351" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for Natural Lan- guage Understanding. Computing Research Reposi- tory, arXiv:1909.10351.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Semantic Pleonasm Detection", |
| "authors": [ |
| { |
| "first": "Omid", |
| "middle": [], |
| "last": "Kashefi", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Lucas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omid Kashefi, Andrew T Lucas, and Rebecca Hwa. 2018. Semantic Pleonasm Detection. In NAACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Convolutional Neural Networks for Sentence Classification", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. EMNLP.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations", |
| "authors": [ |
| { |
| "first": "Sosuke", |
| "middle": [], |
| "last": "Kobayashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sosuke Kobayashi. 2018. Contextual Augmentation: Data Augmentation by Words with Paradigmatic Re- lations. In NAACL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "On Information and Sufficiency. The Annals of Mathematical Statistics", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kullback", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "A" |
| ], |
| "last": "Leibler", |
| "suffix": "" |
| } |
| ], |
| "year": 1951, |
| "venue": "", |
| "volume": "22", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": { |
| "DOI": [ |
| "10.1214/aoms/1177729694" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Kullback and R. A. Leibler. 1951. On Information and Sufficiency. The Annals of Mathematical Statis- tics, 22(1):79-86.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Data Augmentation using Pre-trained Transformer Models", |
| "authors": [ |
| { |
| "first": "Varun", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Choudhary", |
| "suffix": "" |
| }, |
| { |
| "first": "Eunah", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Computing Research Repository", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data Augmentation using Pre-trained Trans- former Models. Computing Research Repository, arXive: 2003.02245.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Distributed Representations of Sentences and Documents", |
| "authors": [ |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed Repre- sentations of Sentences and Documents. In ICML.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Recurrent Neural Network for Text Classification with Multi-Task Learning", |
| "authors": [ |
| { |
| "first": "Pengfei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xipeng", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuanjing", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent Neural Network for Text Classification with Multi-Task Learning. In IJCAI.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Atalaya at TASS 2019: Data Augmentation and Robust Embeddings for Sentiment Analysis", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Franco", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Luque", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "TASS: Workshop on Sentiment Analysis at SEPLN", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franco M. Luque. 2019. Atalaya at TASS 2019: Data Augmentation and Robust Embeddings for Senti- ment Analysis. In TASS: Workshop on Sentiment Analysis at SEPLN.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "WordNet: a lexical database for English", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "George", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": { |
| "DOI": [ |
| "https://dl.acm.org/doi/10.1145/219717.219748" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "EasyAug: An Automatic Textual Data Augmentation Platform for Classification Tasks", |
| "authors": [ |
| { |
| "first": "Siyuan", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Binxia", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yafang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoyu", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerard", |
| "middle": [], |
| "last": "De Melo", |
| "suffix": "" |
| }, |
| { |
| "first": "Chong", |
| "middle": [], |
| "last": "Long", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaolong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "The Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3366424.3383552" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siyuan Qiu, Binxia Xu, Jie Zhang, Yafang Wang, Xi- aoyu Shen, Gerard de Melo, Chong Long, and Xi- aolong Li. 2020. EasyAug: An Automatic Tex- tual Data Augmentation Platform for Classification Tasks. In The Web Conference.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Improving Neural Machine Translation Models with Monolingual Data", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving Neural Machine Translation Models with Monolingual Data. In ACL.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A survey on Image Data Augmentation for Deep Learning", |
| "authors": [ |
| { |
| "first": "Connor", |
| "middle": [], |
| "last": "Shorten", |
| "suffix": "" |
| }, |
| { |
| "first": "Taghi", |
| "middle": [ |
| "M" |
| ], |
| "last": "Khoshgoftaar", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Journal of Big Data", |
| "volume": "6", |
| "issue": "1", |
| "pages": "1--48", |
| "other_ids": { |
| "DOI": [ |
| "10.1186/s40537-019-0197-0" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Connor Shorten and Taghi M. Khoshgoftaar. 2019. A survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 6(1):1-48.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Perelygin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "sense2vec - A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings. Computing Research Repository", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Trask", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Michalak", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.06388" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec - A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings. Computing Research Repository, arXiv:1511.06388.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "That's So Annoying!!!: A Lexical and Frame-Semantic Embedding Based Data Augmentation Approach to Automatic Categorization of Annoying Behaviors using #petpeeve Tweets", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "Yang" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Diyi", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Yang Wang and Diyi Yang. 2015. That's So Annoying!!!: A Lexical and Frame-Semantic Embedding Based Data Augmentation Approach to Automatic Categorization of Annoying Behaviors using #petpeeve Tweets. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Unsupervised Data Augmentation for Consistency Training. Computing Research Repository", |
| "authors": [ |
| { |
| "first": "Qizhe", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised Data Augmentation for Consistency Training. Computing Research Repository, arXiv:1904.12848.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "mixup: Beyond Empirical Risk Minimization", |
| "authors": [ |
| { |
| "first": "Hongyi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Moustapha", |
| "middle": [], |
| "last": "Cisse", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [ |
| "N" |
| ], |
| "last": "Dauphin", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Lopez-Paz", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond Empirical Risk Minimization. In ICLR.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Character-level Convolutional Networks for Text Classification", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiang Zhang, Junbo Zhao, and Yann Lecun. 2015. Character-level Convolutional Networks for Text Classification. In NIPS.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Label-Preserving Heuristics -Sentiment Classification Accuracy: ACC(RS) > ACC(SR) > ACC(RI) > ACC(RD) Positive Distribution vs. Negative Distribution: KLD(RS) < KLD(SR) < KLD(RI) < KLD(RD)", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Non-Label-Preserving Sentiment Heuristics -Sentiment Classification Accuracy: ACC(ONE) >> ACC(ALL) Positive Distribution vs. Negative Distribution: KLD(ONE) < KLD(ALL)", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "Non-Label-Preserving Verbosity Heuristics -Verbosity Classification Accuracy: ACC(NMN) > ACC(SYN) > ACC(DUP) Verbose Distribution vs. Concise Distribution: KLD(NMN) < KLD(SYN) < KLD(DUP)", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "", |
| "num": null, |
| "content": "<table><tr><td>: Examples of Sentences Augmented using Label-Preserving (SR, RI, RS, and RD) and Non-Label-Preserving (ALL and ONE) Sentiment Heuristics</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Examples of Sentences Augmented using the Verbosity Heuristics", |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "text": "Sentiment Classification Accuracy and Difference between Augmented Positive and Negative Distributions", |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "text": "", |
| "num": null, |
| "content": "<table><tr><td>: Verbosity Classification Accuracy and Difference between Augmented Verbose and Concise Distributions</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "text": "Execution Time of Our Proposed Heuristic Suitability Evaluation Approach Compared to the Classifier Training Time", |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |