{
"paper_id": "D19-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:01:30.585955Z"
},
"title": "Practical Obstacles to Deploying Active Learning",
"authors": [
{
"first": "David",
"middle": [],
"last": "Lowell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northeastern University",
"location": {}
},
"email": "lowell.d@husky.neu.edu"
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "zlipton@cmu.edu"
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northeastern University",
"location": {}
},
"email": "b.wallace@northeastern.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Active learning (AL) is a widely-used training strategy for maximizing predictive performance subject to a fixed annotation budget. In AL one iteratively selects training examples for annotation, often those for which the current model is most uncertain (by some measure). The hope is that active sampling leads to better performance than would be achieved under independent and identically distributed (i.i.d.) random samples. While AL has shown promise in retrospective evaluations, these studies often ignore practical obstacles to its use. In this paper we show that while AL may provide benefits when used with specific models and for particular domains, the benefits of current approaches do not generalize reliably across models and tasks. This is problematic because in practice one does not have the opportunity to explore and compare alternative AL strategies. Moreover, AL couples the training dataset with the model used to guide its acquisition. We find that subsequently training a successor model with an actively-acquired dataset does not consistently outperform training on i.i.d. sampled data. Our findings raise the question of whether the downsides inherent to AL are worth the modest and inconsistent performance gains it tends to afford.",
"pdf_parse": {
"paper_id": "D19-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "Active learning (AL) is a widely-used training strategy for maximizing predictive performance subject to a fixed annotation budget. In AL one iteratively selects training examples for annotation, often those for which the current model is most uncertain (by some measure). The hope is that active sampling leads to better performance than would be achieved under independent and identically distributed (i.i.d.) random samples. While AL has shown promise in retrospective evaluations, these studies often ignore practical obstacles to its use. In this paper we show that while AL may provide benefits when used with specific models and for particular domains, the benefits of current approaches do not generalize reliably across models and tasks. This is problematic because in practice one does not have the opportunity to explore and compare alternative AL strategies. Moreover, AL couples the training dataset with the model used to guide its acquisition. We find that subsequently training a successor model with an actively-acquired dataset does not consistently outperform training on i.i.d. sampled data. Our findings raise the question of whether the downsides inherent to AL are worth the modest and inconsistent performance gains it tends to afford.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Although deep learning now achieves state-ofthe-art results on a number of supervised learning tasks (Johnson and Zhang, 2016; Ghaddar and Langlais, 2018) , realizing these gains requires large annotated datasets (Shen et al., 2018) . This data dependence is problematic because labels are expensive. Several lines of research seek to reduce the amount of supervision required to achieve acceptable predictive performance, including semisupervised (Chapelle et al., 2009) , transfer (Pan and Yang, 2010) , and active learning (AL) (Cohn et al., 1996; Settles, 2012) .",
"cite_spans": [
{
"start": 101,
"end": 126,
"text": "(Johnson and Zhang, 2016;",
"ref_id": "BIBREF12"
},
{
"start": 127,
"end": 154,
"text": "Ghaddar and Langlais, 2018)",
"ref_id": "BIBREF8"
},
{
"start": 213,
"end": 232,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 448,
"end": 471,
"text": "(Chapelle et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 483,
"end": 503,
"text": "(Pan and Yang, 2010)",
"ref_id": "BIBREF20"
},
{
"start": 531,
"end": 550,
"text": "(Cohn et al., 1996;",
"ref_id": "BIBREF4"
},
{
"start": 551,
"end": 565,
"text": "Settles, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In AL, rather than training on a set of labeled data sampled at i.i.d. random from some larger population, the learner engages the annotator in a cycle of learning, iteratively selecting training data for annotation and updating its model. Poolbased AL (the variant we consider) proceeds in rounds. In each, the learner applies a heuristic to score unlabeled instances, selecting the highest scoring instances for annotation. 1 Intuitively, by selecting training data cleverly, an active learner might achieve greater predictive performance than it would by choosing examples at random.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The more informative samples come at the cost of violating the standard i.i.d. assumption upon which supervised machine learning typically relies. In other words, the training and test data no longer reflect the same underlying data distribution. Empirically, AL has been found to work well with a variety of tasks and models (Settles, 2012; Ramirez-Loaiza et al., 2017; Gal et al., 2017a; Zhang et al., 2017; Shen et al., 2018) . However, academic investigations of AL typically omit key real-world considerations that might overestimate its utility. For example, once a dataset is actively acquired with one model, it is seldom investigated whether this training sample will confer benefits if used to train a second model (vs i.i.d. data) . Given that datasets often outlive learning algorithms, this is an important practical consideration. In contrast to experimental (retrospective) studies, in a real-world setting, an AL practitioner is not afforded the opportunity to retrospectively analyze or alter their scoring function. One would instead need to expend significant resources to validate that a given scoring function performs as intended for a particular model and task. This would require i.i.d. sampled data to evaluate the comparative effectiveness of different AL strategies. However, collection of such additional data would defeat the purpose of AL, i.e., obviating the need for a large amount of supervision. To confidently use AL in practice, one must have a reasonable belief that a given AL scoring (or acquisition) function will produce the desired results before they deploy it (Attenberg and Provost, 2011) .",
"cite_spans": [
{
"start": 326,
"end": 341,
"text": "(Settles, 2012;",
"ref_id": null
},
{
"start": 342,
"end": 370,
"text": "Ramirez-Loaiza et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 371,
"end": 389,
"text": "Gal et al., 2017a;",
"ref_id": "BIBREF6"
},
{
"start": 390,
"end": 409,
"text": "Zhang et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 410,
"end": 428,
"text": "Shen et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 725,
"end": 741,
"text": "(vs i.i.d. data)",
"ref_id": null
},
{
"start": 1604,
"end": 1633,
"text": "(Attenberg and Provost, 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most AL research does not explicitly characterize the circumstances under which AL may be expected to perform well. Practitioners must therefore make the implicit assumption that a given active acquisition strategy is likely to perform well under any circumstances. Our empirical findings suggest that this assumption is not well founded and, in fact, common AL algorithms behave inconsistently across model types and datasets, often performing no better than random (i.i.d.) sampling (1a). Further, while there is typically some AL strategy which outperforms i.i.d. random samples for a given dataset, which heuristic varies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions. We highlight important but often overlooked issues in the use of AL in practice. We report an extensive set of experimental results on classification and sequence tagging tasks that suggest AL typically affords only marginal performance gains at the somewhat high cost of noni.i.d. training samples, which do not consistently transfer well to subsequent models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We illustrate inconsistent comparative performance using AL. Consider Figure 1a , in which we plot the relative gains (\u2206) achieved by a BiLSTM model using a maximum-entropy active sampling strategy, as compared to the same model trained with randomly sampled data. Positive values on the y-axis correspond to cases in which AL achieves better performance than random sampling, 0 (dotted line) indicates no difference between the two, and negative values correspond to cases in which random sampling performs better than AL. Across the four datasets shown, results are decidedly mixed.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 79,
"text": "Figure 1a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The (Potential) Trouble with AL",
"sec_num": "2"
},
{
"text": "And yet realizing these equivocal gains using AL brings inherent drawbacks. For example, acquisition functions generally depend on the underlying model being trained (Settles, 2009 (Settles, , 2012 , which we will refer to as the acquisition model. Consequently, the collected training data and the acquisition model are coupled. This coupling is problematic because manually labeled data tends to have a longer shelf life than models, largely because it is expensive to acquire. However, progress in machine learning is fast. Consequently, in many settings, an actively acquired dataset may remain in use (much) longer than the source model used to acquire it. In these cases, a few natural ques-tions arise: How does a successor model S fare, when trained on data collected via an acquisition model A? How does this compare to training S on natively acquired data? How does it compare to training S on i.i.d. data?",
"cite_spans": [
{
"start": 166,
"end": 180,
"text": "(Settles, 2009",
"ref_id": "BIBREF25"
},
{
"start": 181,
"end": 197,
"text": "(Settles, , 2012",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The (Potential) Trouble with AL",
"sec_num": "2"
},
{
"text": "For example, if we use uncertainty sampling under a support vector machine (SVM) to acquire a training set D, and subsequently train a Convolutional Neural Network (CNN) using D, will the CNN perform better than it would have if trained on a dataset acquired via i.i.d. random sampling? And how does it perform compared to using a training corpus actively acquired using the CNN? Figure 1b shows results for a text classification example using the Subjectivity corpus (Pang and Lee, 2004) . We consider three models: a Bidirectional Long Short-Term Memory Network (BiLSTM) (Hochreiter and Schmidhuber, 1997) , a Convolutional Neural Network (CNN) (Kim, 2014; Zhang and Wallace, 2015) , and a Support Vector Machine (SVM) (Joachims, 1998) . Training the LSTM with a dataset actively acquired using either of the other models yields predictive performance that is worse than that achieved under i.i.d. sampling. Given that datasets tend to outlast models, these results raise questions regarding the benefits of using AL in practice.",
"cite_spans": [
{
"start": 468,
"end": 488,
"text": "(Pang and Lee, 2004)",
"ref_id": "BIBREF21"
},
{
"start": 573,
"end": 607,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
},
{
"start": 647,
"end": 658,
"text": "(Kim, 2014;",
"ref_id": "BIBREF13"
},
{
"start": 659,
"end": 683,
"text": "Zhang and Wallace, 2015)",
"ref_id": "BIBREF33"
},
{
"start": 721,
"end": 737,
"text": "(Joachims, 1998)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 380,
"end": 389,
"text": "Figure 1b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The (Potential) Trouble with AL",
"sec_num": "2"
},
{
"text": "We note that in prior work, Tomanek and Morik (2011) also explored the transferability of actively acquired datasets, although their work did not consider modern deep learning models or share our broader focus on practical issues in AL.",
"cite_spans": [
{
"start": 28,
"end": 52,
"text": "Tomanek and Morik (2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The (Potential) Trouble with AL",
"sec_num": "2"
},
{
"text": "We seek to answer two questions empirically: (1) How reliably does AL yield gains over sampling i.i.d.? And, (2) What happens when we use a dataset actively acquired using one model to train a different (successor) model? To answer these questions, we consider two tasks for which AL has previously been shown to confer considerable benefits: text classification and sequence tagging (specifically NER). 2 To build intuition, our experiments address both linear models and deep networks more representative of the current state-of-the-art for these tasks. We investigate the standard strategy of acquiring data and training using a single model, and also the case of acquiring data using one model and subsequently using it to train a second model. Our experiments consider all possible (acquisition, successor) pairs among the considered models, such that the standard AL scheme corresponds to the setting in which the acquisition and successor models are same. For each pair (A, S), we first simulate iterative active data acquisition with model A to label a training dataset D A . We then train the successor model S using D A .",
"cite_spans": [
{
"start": 404,
"end": 405,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
{
"text": "In our evaluation, we compare the relative performance (accuracy or F1, as appropriate for the task) of the successor model trained with corpus D A to the scores achieved by training on comparable amounts of native and i.i.d. sampled data. We simulate pool-based AL using labeled benchmark datasets by withholding document labels from the models. This induces a pool of unlabeled data U. In AL, it is common to warm-start the acquisition model, training on some modest amount of i.i.d. labeled data D w before using the model to score candidates in U (Settles, 2009) and commencing the AL process. We follow this convention throughout.",
"cite_spans": [
{
"start": 551,
"end": 566,
"text": "(Settles, 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
{
"text": "Once we have trained the acquisition model on the warm-start data, we begin the simulated AL loop, iteratively selecting instances for labeling and adding them to the dataset. We denote the dataset acquired by model A at iteration t by D t A ; D 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
{
"text": "A is initialized to D w for all models (i.e., all values of A). At each iteration, the acquisition model is trained with D t A . It then scores the remaining unlabeled documents in U \\ D t A according to a standard uncertainty AL heuristic. The top n candidates C t A are selected for (simulated) annotation. Their labels are revealed and they are added to the training set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
{
"text": "D t+1 A \u2190 D t A \u222a C t A .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
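The acquisition loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: `train`, `score`, and `oracle` are hypothetical stand-ins for fitting the acquisition model, the uncertainty heuristic, and the (simulated) annotator.

```python
def active_learning_loop(pool, oracle, train, score, warm_start, n_per_round, rounds):
    """Pool-based AL: repeatedly fit the acquisition model on D^t, score the
    remaining pool U \\ D^t, and move the top-n candidates C^t into D^{t+1}."""
    labeled = {x: oracle(x) for x in warm_start}        # D^0 = D_w
    unlabeled = [x for x in pool if x not in labeled]   # U \ D^0
    for _ in range(rounds):
        model = train(labeled)
        # highest-scoring (most uncertain) candidates are selected for annotation
        candidates = sorted(unlabeled, key=lambda x: score(model, x),
                            reverse=True)[:n_per_round]
        for x in candidates:
            labeled[x] = oracle(x)                      # reveal label: D^{t+1} = D^t U C^t
            unlabeled.remove(x)
    return labeled
```

At the end of the loop, `labeled` plays the role of the final acquired dataset D T A for this acquisition model.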
{
"text": "At the experiment's conclusion (time step T ), each acquisition model A will have selected a (typically distinct) subset of U for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
{
"text": "Once we have acquired datasets from each acquisition model D A , we evaluate the performance of each possible successor model when trained on D A . Specifically, we train each successor model S on the acquired data D t A for all t in the range [0, T ], evaluating its performance on a held-out test set (distinct from U). We compare the performance achieved in this case to that obtained using an i.i.d. training set of the same size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
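The transfer evaluation described above reduces to a simple comparison at each dataset size; the sketch below is illustrative (the function names are our own, not the paper's), assuming `active_snapshots` holds the D A snapshots at each iteration t and `iid_sets` holds i.i.d. samples of matching sizes.

```python
def evaluate_transfer(active_snapshots, iid_sets, train_successor, evaluate):
    """Train the successor model S on each actively acquired snapshot D_A^t and
    on an i.i.d. training set of the same size; return the score differences."""
    deltas = []
    for d_active, d_iid in zip(active_snapshots, iid_sets):
        assert len(d_active) == len(d_iid)  # comparable supervision at each t
        score_active = evaluate(train_successor(d_active))
        score_iid = evaluate(train_successor(d_iid))
        deltas.append(score_active - score_iid)  # > 0: the AL data transferred well
    return deltas
```

Plotting these deltas against training-set size yields curves like those in Figure 1.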
{
"text": "We run this experiment ten times, averaging results to create summary learning curves, as shown in Figure 1 . All reported results, including i.i.d. baselines, are averages of ten experiments, each conducted with a distinct D w . These learning curves quantify the comparative performance of a particular model achieved using the same amount of supervision, but elicited under different acquisition models. For each model, we compare the learning curves of each acquisition strategy, including active acquisition using a foreign model and subsequent transfer, active acquisition without changing models (i.e., typical AL), and the baseline strategy of i.i.d. sampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Questions and Setup",
"sec_num": "3"
},
{
"text": "We now briefly describe the models, datasets, acquisition functions, and implementation details for the experiments we conduct with active learners for text classification (4.1) and NER (4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks",
"sec_num": "4"
},
{
"text": "Models We consider three standard models for text classification: Support Vector Machines (SVMs), Convolutional Neural Networks (CNNs) (Kim, 2014; Zhang and Wallace, 2015) , and Bidirectional Long Short-Term Memory (BiLSTM) networks (Hochreiter and Schmidhuber, 1997) . For SVM, we represent texts via sparse, TF-IDF bag-of-words (BoW) vectors. For neural models (CNN and BiLSTM), we represent each document as a sequence of word embeddings, stacked into an l \u00d7 d matrix where l is the length of the sentence and d is the dimensionality of the word embeddings. We initialize all word embeddings with pretrained GloVe vectors (Pennington et al., 2014) .",
"cite_spans": [
{
"start": 135,
"end": 146,
"text": "(Kim, 2014;",
"ref_id": "BIBREF13"
},
{
"start": 147,
"end": 171,
"text": "Zhang and Wallace, 2015)",
"ref_id": "BIBREF33"
},
{
"start": 233,
"end": 267,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
},
{
"start": 625,
"end": 650,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "4.1"
},
{
"text": "We initialize vector representations for all words for which we do not have pre-trained embeddings uniformly at random. For the CNN, we impose a maximum sentence length of 120 words, truncating sentences exceeding this length and padding shorter sentences. We used filter sizes of 3, 4, and 5, with 128 filters per size. For BiL-STMs, we selected the maximum sentence length such that 90% of sentences in D t would be of equal or lesser length. 3 We trained all neural models using the Adam optimizer (Kingma and Ba, 2014), with a learning rate of 0.001, \u03b2 1 = 0.9, \u03b2 1 = 0.999, and = 10 \u22128 .",
"cite_spans": [
{
"start": 445,
"end": 446,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Classification",
"sec_num": "4.1"
},
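For concreteness, a single Adam update with the hyperparameters quoted above can be written out directly. This is a textbook sketch of the optimizer's update rule, not code from the paper.

```python
def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8)
    for a scalar parameter w with gradient g; t starts at 1."""
    m = b1 * m + (1 - b1) * g        # biased first-moment estimate
    v = b2 * v + (1 - b2) * g * g    # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)        # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v
```

In practice one would use a framework optimizer with these same settings; the sketch just makes the stated hyperparameters concrete.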
{
"text": "We perform text classification experiments using four benchmark datasets. We reserve 20% of each dataset (sampled at i.i.d. random) as test data, and use the remaining 80% as the pool of unlabeled data U. We sample 2.5% of the remaining documents randomly from U for each D w . All models receive the same D w for any given experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
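The split described above (20% held-out test, 80% pool U, 2.5% of U as the warm-start set D w) can be sketched as follows; this is an illustrative reconstruction, with `make_pools` a hypothetical helper rather than the authors' code.

```python
import random

def make_pools(corpus, seed=0, test_frac=0.20, warm_frac=0.025):
    """Split a labeled corpus into a held-out test set, a pool U whose labels
    are withheld during AL, and an i.i.d. warm-start sample D_w drawn from U."""
    rng = random.Random(seed)
    docs = list(corpus)
    rng.shuffle(docs)
    n_test = int(len(docs) * test_frac)
    test, pool = docs[:n_test], docs[n_test:]            # 20% test, 80% pool U
    warm = rng.sample(pool, int(len(pool) * warm_frac))  # 2.5% of U as D_w
    return test, pool, warm
```

Varying `seed` yields the distinct D w sets used across repeated experiments.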
{
"text": "\u2022 Movie Reviews: This corpus consists of sentences drawn from movie reviews. The task is to classify sentences as expressing positive or negative sentiment (Pang and Lee, 2005 ).",
"cite_spans": [
{
"start": 156,
"end": 175,
"text": "(Pang and Lee, 2005",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
{
"text": "\u2022 Subjectivity: This dataset consists of statements labeled as either objective or subjective (Pang and Lee, 2004 ).",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "(Pang and Lee, 2004",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
{
"text": "\u2022 TREC: This task entails categorizing questions into 1 of 6 categories based on the subject of the question (e.g., questions about people, locations, and so on) (Li and Roth, 2002) . The TREC dataset defines standard train/test splits, but we generate our own for consistency in train/validation/test proportions across corpora.",
"cite_spans": [
{
"start": 162,
"end": 181,
"text": "(Li and Roth, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
{
"text": "\u2022 Customer Reviews: This dataset is composed of product reviews. The task is to categorize them as positive or negative (Hu and Liu, 2004) .",
"cite_spans": [
{
"start": 120,
"end": 138,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": null
},
{
"text": "Models We consider transfer between two NER models: Conditional Random Fields (CRF) (Lafferty et al., 2001 ) and Bidirectional LSTM-CNNs (BiLSTM-CNNs) (Chiu and Nichols, 2015). For the CRF model we use a set of features including word-level and character-based embeddings, word suffix, capitalization, digit contents, and part-of-speech tags. The BiLSTM-CNN model 4 initializes word vectors to pretrained GloVe vector embeddings (Pennington et al., 2014) . We learn all word and character level features from scratch, initializing with random embeddings.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF15"
},
{
"start": 429,
"end": 454,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "4.2"
},
{
"text": "Datasets We perform NER experiments on the CoNLL-2003 and OntoNotes-5.0 English datasets. We used the standard test sets for both corpora, but merged training and validation sets to form U. We initialize each D w to 2.5% of U. Figure 2 : Sample learning curves for the text classification task on the Movie Reviews dataset and the NER task on the OntoNotes dataset using the maximum entropy acquisition function (we report learning curves for all models and datasets in the Appendix). Individual plots correspond to successor models. Each line corresponds to an acquisition model, with the blue line representing an i.i.d. baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 235,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "4.2"
},
{
"text": "scheme (Tjong Kim Sang and De Meulder, 2003) . The corpus contains 301,418 words.",
"cite_spans": [
{
"start": 7,
"end": 44,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "4.2"
},
{
"text": "\u2022 OntoNotes-5.0: A corpus of sentences drawn from a variety of sources including newswire, broadcast news, broadcast conversation, and web data. Words are categorized using eighteen entity categories annotated using the IOB scheme (Weischedel et al., 2013) . The corpus contains 2,053,446 words.",
"cite_spans": [
{
"start": 231,
"end": 256,
"text": "(Weischedel et al., 2013)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "4.2"
},
{
"text": "We evaluate these models using three common active learning acquisition functions: classical uncertainty sampling, query by committee (QBC), and Bayesian active learning by disagreement (BALD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
{
"text": "Uncertainty Sampling For text classification we use the entropy variant of uncertainty sampling, which is perhaps the most widely used AL heuristic (Settles, 2009) . Documents are selected for annotation according to the function",
"cite_spans": [
{
"start": 148,
"end": 163,
"text": "(Settles, 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
{
"text": "argmax x\u2208U \u2212 j P (y j |x) log P (y j |x),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
{
"text": "where x are instances in the pool U, j indexes potential labels of these (we have elided the in-stance index here) and P (y j |x) is the predicted probability that x belongs to class y j (this estimate is implicitly conditioned on a model that can provide such estimates). For SVM, the equivalent form of this is to choose documents closest to the decision boundary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
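The entropy heuristic above is straightforward to implement; the sketch below (illustrative, not the paper's implementation) scores a pool of predicted class distributions and returns the indices of the most uncertain instances.

```python
import math

def entropy(probs):
    """Predictive entropy: -sum_j P(y_j|x) log P(y_j|x)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(pool_probs, n):
    """Indices of the n pool instances with the highest predictive entropy."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]), reverse=True)
    return ranked[:n]
```

A uniform predictive distribution maximizes the score, so instances the model is most unsure about are annotated first.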
{
"text": "For the NER task we use maximized normalized log-probability (MNLP) (Shen et al., 2018) as our AL heuristic, which adapts the least confidence heuristics to sequences by normalizing the log probabilities of predicted tag sequence by the sequence length. This avoids favoring selecting longer sentences (owing to the lower probability of getting the entire tag sequence right). Documents are sorted in ascending order according to the function",
"cite_spans": [
{
"start": 68,
"end": 87,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
{
"text": "max y 1 ,...,yn 1 n n i=1 log P (y i |y 1 , ..., y n\u22121 , x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
{
"text": "Where the max over y assignments denotes the most likely set of tags for instance x and n is the sequence length. Because explicitly calculating the most likely tag sequence is computationally expensive, we follow (Shen et al., 2018) in using a greedy decoding (i.e., beam search with width 1) to determine the model's prediction. Query by Committee For our QBC experiments, we use the bagging variant of QBC (Mamitsuka et al., 1998) , in which a committee of n models is assembled by sampling with replacement n sets of m documents from the training data (D t at each t). Each model is then trained using a distinct resulting set, and the pool documents that maximize their disagreement are selected. We use 10 as our committee size, and set m as equal to the number of documents in D t .",
"cite_spans": [
{
"start": 214,
"end": 233,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 409,
"end": 433,
"text": "(Mamitsuka et al., 1998)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acquisition Functions",
"sec_num": "4.3"
},
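The MNLP score with greedy decoding can be sketched as below. This is an illustrative simplification assuming independent per-token tag distributions (so the greedy tag at each position is just the per-token argmax); the paper's models emit sequence-structured predictions, but the normalization logic is the same.

```python
import math

def mnlp(token_distributions):
    """Length-normalized log-probability of the greedy (beam width 1) tag
    sequence: (1/n) * sum_i log P(y_i | ..., x)."""
    log_probs = [math.log(max(dist)) for dist in token_distributions]
    return sum(log_probs) / len(log_probs)

def rank_for_annotation(pool):
    """Sort sequence indices ascending by MNLP: least confident first."""
    return sorted(range(len(pool)), key=lambda i: mnlp(pool[i]))
```

Because the score is an average rather than a sum, long sentences are not automatically ranked as less confident.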
{
"text": "For the text classification task, we compute disagreement using Kullback-Leibler divergence (McCallum and Nigamy, 1998) ments for annotation according to the function",
"cite_spans": [
{
"start": 92,
"end": 119,
"text": "(McCallum and Nigamy, 1998)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "argmax x\u2208U 1 C C c=1 j P c (y j |x) log P c (y j |x) P C (y j |x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "where x are instances in the pool U, j indexes potential labels of these instances, and C is the committee size. P c (y j |x) is the probability that x belongs to class y j as predicted by committee member c. P C (y j |x) represents the consensus probability that x belongs to class y j ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "1 C C c=1 P c (y j |x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
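The QBC disagreement score above is the mean KL divergence of each member's distribution from the committee consensus; a minimal sketch (our own, for illustration):

```python
import math

def qbc_kl_disagreement(member_probs):
    """Average KL divergence of each committee member's predictive distribution
    P_c from the consensus distribution P_C (the committee mean)."""
    C = len(member_probs)
    J = len(member_probs[0])
    consensus = [sum(m[j] for m in member_probs) / C for j in range(J)]
    total = 0.0
    for m in member_probs:
        total += sum(m[j] * math.log(m[j] / consensus[j])
                     for j in range(J) if m[j] > 0)
    return total / C
```

The score is zero when all members agree exactly and grows as their predictive distributions diverge.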
{
"text": "For NER, we compute disagreement using the average per word vote-entropy (Dagan and Engelson, 1995) , selecting sequences for annotation which maximize the function",
"cite_spans": [
{
"start": 73,
"end": 99,
"text": "(Dagan and Engelson, 1995)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "\u2212 1 n n i=1 m V (y i , m) C log V (y i , m) C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
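The per-word vote entropy above can be sketched as follows; this is an illustrative reconstruction in which `votes_per_word[i]` is a hypothetical mapping from each tag m to the vote count V(y i, m).

```python
import math

def vote_entropy(votes_per_word, committee_size):
    """Average per-word vote entropy: votes_per_word[i] maps tag m to
    V(y_i, m), the number of committee members whose most likely tag
    sequence assigns m to word i."""
    n = len(votes_per_word)
    total = 0.0
    for word_votes in votes_per_word:
        for count in word_votes.values():
            if count > 0:
                frac = count / committee_size
                total += frac * math.log(frac)
    return -total / n
```

Unanimous committees score zero; an even split over tags maximizes the per-word term.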
{
"text": "where n is the sequence length, C is the committee size, and V (y i , m) is the number of committee members who assign tag m to word i in their most likely tag sequence. We do not apply the QBC acquisition function to the OntoNotes dataset, as training the committee for this larger dataset becomes impractical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "Bayesian AL by Disagreement We use the Monte Carlo variant of BALD, which exploits an interpretation of dropout regularization as a Bayesian approximation to a Gaussian process (Gal et al., 2017b; Siddhant and Lipton, 2018) . This technique entails applying dropout at test time, and then estimating uncertainty as the disagreement between outputs realized via multiple passes through the model. We use the acquisition function proposed in (Siddhant and Lipton, 2018) , which selects for annotation those instances that maximize the number of passes through the model that disagree with the most popular choice:",
"cite_spans": [
{
"start": 177,
"end": 196,
"text": "(Gal et al., 2017b;",
"ref_id": "BIBREF7"
},
{
"start": 197,
"end": 223,
"text": "Siddhant and Lipton, 2018)",
"ref_id": "BIBREF28"
},
{
"start": 440,
"end": 467,
"text": "(Siddhant and Lipton, 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "argmax x\u2208U (1 \u2212 count(mode(y 1 x , ..., y T x )) T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
{
"text": "where x are instances in the pool U, y i x is the class prediction of the ith model pass on instance x, and T is the number of passes taken through the model. Any ties are resolved using uncertainty sampling over the mean predicted probabilities of all T passes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
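The BALD acquisition score above reduces to one line once the T stochastic (dropout) forward passes have been collected; a minimal sketch, with the dropout passes themselves assumed to have been run elsewhere:

```python
from collections import Counter

def bald_disagreement(pass_predictions):
    """Dropout-based disagreement: the fraction of stochastic forward passes
    whose class prediction differs from the most popular (modal) prediction,
    i.e., 1 - count(mode)/T."""
    T = len(pass_predictions)
    mode_count = Counter(pass_predictions).most_common(1)[0][1]
    return 1.0 - mode_count / T
```

Instances maximizing this fraction are selected, with ties broken by uncertainty sampling over the mean predicted probabilities, as described above.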
{
"text": "In the NER task, agreement is measured across the entire sequence. Because this acquisition function relies on dropout, we do not consider it for non-neural models (SVM and CRF).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},
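{
"text": "The acquisition rule above, selecting instances that maximize the fraction of dropout passes disagreeing with the modal prediction, can be sketched as follows. The names `disagreement_score`, `select_batch`, and `toy_predict` are hypothetical; in a real implementation `stochastic_predict` would be a forward pass through the model with dropout left active:\n\n```python\nimport random\nfrom collections import Counter\n\ndef disagreement_score(stochastic_predict, x, T=20):\n    \"\"\"1 - count(mode(y^1, ..., y^T)) / T, where each y^t is the class\n    predicted on one stochastic forward pass (dropout kept active).\"\"\"\n    preds = [stochastic_predict(x) for _ in range(T)]\n    mode_count = Counter(preds).most_common(1)[0][1]\n    return 1.0 - mode_count / T\n\ndef select_batch(stochastic_predict, pool, k, T=20):\n    \"\"\"Return the k pool instances whose T dropout passes disagree most.\"\"\"\n    ranked = sorted(pool, reverse=True,\n                    key=lambda x: disagreement_score(stochastic_predict, x, T))\n    return ranked[:k]\n\n# Toy stand-in for a dropout model: unstable on \"hard\" inputs only.\ndef toy_predict(x):\n    return random.randint(0, 1) if x == \"hard\" else 0\n\nrandom.seed(0)\nbatch = select_batch(toy_predict, [\"easy1\", \"hard\", \"easy2\"], k=1)\n```\n\nInstances on which every pass agrees score 0, so the acquisition function concentrates annotation effort on inputs where the (approximate) posterior is most uncertain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": null
},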
{
"text": "We compare transfer between all possible (acquisition, successor) model pairs for each task. We report the performance of each model under all acquisition functions both in tables compiling results (Table 1 and Table 2 for classification and NER, respectively) and graphically via learning curves that plot predictive performance as a function of train set size (Figure 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 218,
"text": "(Table 1 and Table 2",
"ref_id": "TABREF1"
},
{
"start": 362,
"end": 372,
"text": "(Figure 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We report additional results, including all learning curves (for all model pairs and for all tasks), and tabular results (for all acquisition functions) in the Appendix. We also provide in the Appendix plots resembling 1a for all (model, acquisition function) pairs that report the difference between performance under standard AL (in which acquisition and successor model are the same) and that under commensurate i.i.d. data, which affords further analysis of the gains offered by standard AL. For text classification tasks, we report accuracies; for NER tasks, we report F1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "To compare the learning curves, we select incremental points along the x-axis and report the performance at these points. Specifically, we report results with training sets containing 10% and 20% of the training pool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Results in Tables 1 and 2 AL thus seems to yield modest (though inconsistent) improvements over i.i.d. random sampling, but our results further suggest that this comes at an additional cost: the acquired dataset may not generalize well to new learners. Specifically, models trained on foreign actively acquired datasets tend to underperform those trained on i.i.d. datasets. We observe this most clearly in the classification task, where only a handful of (acquisition, successor, acquisition function) combinations lead to performance greater than that achieved using i.i.d. data. Specifically, only 37.5% of the tabulated data points representing dataset transfer (in which acquisition and successor models differ) outperform the i.i.d. baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 25,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Results for NER are more favorable for AL. For this task we observe consistent improved performance versus the i.i.d. baseline in both standard AL data points and transfer data points. These results are consistent with previous findings on transferring actively acquired datasets for NER (Tomanek and Morik, 2011) .",
"cite_spans": [
{
"start": 288,
"end": 313,
"text": "(Tomanek and Morik, 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In standard AL for text classification, the only (model, acquisition function) pairs that we observe to produce better than i.i.d. results with any regularity are uncertainty with SVM or CNN, and BALD with CNN. When transferring actively acquired datasets, we do not observe consistently better than i.i.d. results with any combination of acquisition model, successor model, and acquisition function. The success of AL appears to depend very much on the dataset. For example, AL methods -both in the standard and acquisition/successor settings -perform much more reliably on the Subjectivity dataset than any other. In contrast, AL performs consistently poorly on the TREC dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Our findings suggest that AL is brittle. During experimentation, we also found that performance often depends on factors that one may think are minor design decisions. For example, our setup largely resembles that of Siddhant and Lipton (2018) , yet initially we observed large discrepancies in results. Digging into this revealed that much of the difference was due to our use of word2vec (Mikolov et al., 2013) rather than GloVe (Pennington et al., 2014) for word embedding initializations. That small decisions like this can result in relatively pronounced performance differences for AL strategies is disconcerting.",
"cite_spans": [
{
"start": 217,
"end": 243,
"text": "Siddhant and Lipton (2018)",
"ref_id": "BIBREF28"
},
{
"start": 390,
"end": 412,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 431,
"end": 456,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "A key advantage afforded by neural models is representation learning. A natural question here is therefore whether the representations induced by the neural models differs as a function of the acquisition strategy. To investigate this, we measure pairwise distances between instances in the learned feature space after training. Specifically, for each test instance we calculate its cosine similarity to all other test instances, inducing a ranking. We do this in the three different feature spaces learned by the CNN and LSTM models, respectively, after sampling under the three acquisition models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We quantify dissimilarities between the rankings induced under different representations via Spearman's rank correlation coefficients. We re-peat this for all instances in the test set, and average over these coefficients to derive an overall similarity measure, which may be viewed as quantifying the similarity between learned feature spaces via average pairwise similarities within them. As reported in Table 4 , despite the aforementioned differences in predictive performance, the learned representations seem to be similar. In other words, sampling under foreign acquisition models does not lead to notably different representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 406,
"end": 413,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
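{
"text": "The feature-space comparison described above can be sketched as follows. All function names are illustrative, and the closed-form Spearman formula assumes no tied ranks, a simplification relative to a full statistical implementation:\n\n```python\nimport math\n\ndef cosine(u, v):\n    dot = sum(a * b for a, b in zip(u, v))\n    return dot / (math.sqrt(sum(a * a for a in u)) *\n                  math.sqrt(sum(b * b for b in v)))\n\ndef similarity_ranking(feats, i):\n    \"\"\"Rank every other instance by cosine similarity to instance i.\"\"\"\n    others = [j for j in range(len(feats)) if j != i]\n    order = sorted(others, key=lambda j: -cosine(feats[i], feats[j]))\n    return {j: r for r, j in enumerate(order)}\n\ndef spearman(rank_a, rank_b):\n    \"\"\"Spearman's rho for two rankings of the same items (assumes no ties).\"\"\"\n    n = len(rank_a)\n    d2 = sum((rank_a[j] - rank_b[j]) ** 2 for j in rank_a)\n    return 1.0 - 6.0 * d2 / (n * (n * n - 1))\n\ndef representation_similarity(feats_a, feats_b):\n    \"\"\"Average, over instances, of the Spearman correlation between the\n    similarity rankings induced by two learned feature spaces.\"\"\"\n    n = len(feats_a)\n    return sum(spearman(similarity_ranking(feats_a, i),\n                        similarity_ranking(feats_b, i)) for i in range(n)) / n\n\n# Identical feature spaces induce identical rankings, so the measure is 1.\nfeats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]\nsim = representation_similarity(feats, feats)\n```\n\nValues near 1 indicate that the two spaces order neighbors similarly; values near -1 indicate reversed orderings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},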
{
"text": "We extensively evaluated standard AL methods under varying model, domain, and acquisition function combinations for two standard NLP tasks (text classification and sequence tagging). We also assessed performance achieved when transferring an actively sampled training dataset from an acquisition model to a distinct successor model. Given the longevity and value of training sets and the frequency at which new ML models advance the state-of-the-art, this should be an anticipated scenario: Annotated data often outlives models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our findings indicate that AL performs unreliably. While a specific acquisition function and model applied to a particular task and domain may be quite effective, it is not clear that this can be predicted ahead of time. Indeed, there is no way to retrospectively determine the relative success of AL without collecting a relatively large quantity of i.i.d. sampled data, and this would undermine the purpose of AL in the first place. Further, even if such an i.i.d. sample were taken as a diagnostic tool early in the active learning cycle, relative success early in the AL cycle is not necessarily indicative of relative success later in the cycle, as illustrated by Figure 1a .",
"cite_spans": [],
"ref_spans": [
{
"start": 669,
"end": 678,
"text": "Figure 1a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Problematically, even in successful cases, an actively sampled training set is linked to the model used to acquire it. We have found that training successor models with this set will often result in performance worse than that attained using an equivalently sized i.i.d. sample. Results are more favorable to AL for NER, as compared to text classification, which is consistent with prior work (Tomanek and Morik, 2011) .",
"cite_spans": [
{
"start": 393,
"end": 418,
"text": "(Tomanek and Morik, 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In short, the relative performance of individual active acquisition functions varies considerably over datasets and domains. While AL often does yield gains over i.i.d. sampling, these tend to be marginal and inconsistent. Moreover, this comes at a relatively steep cost: The acquired dataset may be disadvantageous for training subsequent models. Together these findings raise serious concerns regarding the efficacy of active learning in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "This may be done either deterministically, by selecting the top-k instances, or stochastically, selecting instances with probabilities proportional to heuristic scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recent works have shown that AL is effective for these tasks even when using modern, neural architectures(Zhang et al., 2017;Shen et al., 2018), but do not address our primary concerns regarding replicability and transferability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Passing longer sentences to the BiLSTM degraded performance in preliminary experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implementation of BiLSTM-CNN is based on https: //github.com/asiddhant/Active-NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the Army Research Office (ARO), award W911NF1810328.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inactive learning?: difficulties employing active learning in practice",
"authors": [
{
"first": "Josh",
"middle": [],
"last": "Attenberg",
"suffix": ""
},
{
"first": "Foster",
"middle": [],
"last": "Provost",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "12",
"issue": "2",
"pages": "36--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josh Attenberg and Foster Provost. 2011. Inactive learning?: difficulties employing active learning in practice. ACM SIGKDD Explorations Newsletter, 12(2):36-41.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semi-supervised learning (chapelle, o",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Chapelle",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Scholkopf",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Zien",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. 2009. Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews].",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "Jason",
"middle": [
"P",
"C"
],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.08308"
]
},
"num": null,
"urls": [],
"raw_text": "Jason PC Chiu and Eric Nichols. 2015. Named en- tity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Active learning with statistical models",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Cohn",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of artificial intelligence research",
"volume": "4",
"issue": "",
"pages": "129--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A Cohn, Zoubin Ghahramani, and Michael I Jor- dan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129- 145.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Committeebased sampling for training probabilistic classifiers",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Sean",
"middle": [
"P"
],
"last": "Engelson",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning Proceedings",
"volume": "",
"issue": "",
"pages": "150--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan and Sean P Engelson. 1995. Committee- based sampling for training probabilistic classifiers. In Machine Learning Proceedings 1995, pages 150- 157. Elsevier.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep bayesian active learning with image data",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Riashat",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1183--1192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017a. Deep bayesian active learning with im- age data. In International Conference on Machine Learning, pages 1183-1192.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep bayesian active learning with image data",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Riashat",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017b. Deep bayesian active learning with image data. CoRR, abs/1703.02910.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust lexical features for improved neural network namedentity recognition",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Phillippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1896--1907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Phillippe Langlais. 2018. Robust lexical features for improved neural network named- entity recognition. In Proceedings of the 27th In- ternational Conference on Computational Linguis- tics, pages 1896-1907. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text categorization with support vector machines: Learning with many relevant features",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "European conference on machine learning",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many rel- evant features. In European conference on machine learning, pages 137-142. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Supervised and semi-supervised text categorization using lstm for region embeddings",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning",
"volume": "48",
"issue": "",
"pages": "526--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2016. Supervised and semi-supervised text categorization using lstm for region embeddings. In Proceedings of the 33rd In- ternational Conference on International Conference on Machine Learning -Volume 48, ICML'16, pages 526-534. JMLR.org.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning question classifiers",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Query learning strategies using boosting and bagging",
"authors": [
{
"first": "Naoki",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Mamitsuka",
"suffix": ""
}
],
"year": 1998,
"venue": "Machine learning: proceedings of the fifteenth international conference (ICML98)",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoki Abe Hiroshi Mamitsuka et al. 1998. Query learning strategies using boosting and bagging. In Machine learning: proceedings of the fifteenth inter- national conference (ICML98), volume 1. Morgan Kaufmann Pub.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Employing em and pool-based active learning for text classification",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigamy",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "359--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum and Kamal Nigamy. 1998. Employing em and pool-based active learning for text classification. In Proc. International Confer- ence on Machine Learning (ICML), pages 359-367. Citeseer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A survey on transfer learning",
"authors": [
{
"first": "Sinno Jialin",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on knowledge and data engineering",
"volume": "22",
"issue": "10",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental educa- tion: Sentiment analysis using subjectivity summa- rization based on minimum cuts. In Proceedings of the ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Active learning: an empirical study of common baselines",
"authors": [
{
"first": "Maria",
"middle": [
"E"
],
"last": "Ramirez-Loaiza",
"suffix": ""
},
{
"first": "Manali",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Geet",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Bilgic",
"suffix": ""
}
],
"year": 2017,
"venue": "Data Mining and Knowledge Discovery",
"volume": "31",
"issue": "2",
"pages": "287--313",
"other_ids": {
"DOI": [
"10.1007/s10618-016-0469-7"
]
},
"num": null,
"urls": [],
"raw_text": "Maria E. Ramirez-Loaiza, Manali Sharma, Geet Ku- mar, and Mustafa Bilgic. 2017. Active learning: an empirical study of common baselines. Data Mining and Knowledge Discovery, 31(2):287-313.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Active learning literature survey",
"authors": [
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Settles. 2009. Active learning literature survey. Computer Sciences Technical Report 1648, Univer- sity of Wisconsin-Madison.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Active learning",
"authors": [],
"year": null,
"venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning",
"volume": "6",
"issue": "1",
"pages": "1--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2012. Active learning. Synthesis Lec- tures on Artificial Intelligence and Machine Learn- ing, 6(1):1-114.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep active learning for named entity recognition",
"authors": [
{
"first": "Yanyao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Hyokun",
"middle": [],
"last": "Yun",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Yakov",
"middle": [],
"last": "Kronrod",
"suffix": ""
},
{
"first": "Animashree",
"middle": [],
"last": "Anandkumar",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recog- nition. In International Conference on Learning Representations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep bayesian active learning for natural language processing: Results of a large-scale empirical study",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.05697"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Siddhant and Zachary C Lipton. 2018. Deep bayesian active learning for natural language pro- cessing: Results of a large-scale empirical study. arXiv preprint arXiv:1808.05697.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2003, pages 142-147. Ed- monton, Canada.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Inspecting sample reusability for active learning",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Tomanek",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Morik",
"suffix": ""
}
],
"year": 2011,
"venue": "Active Learning and Experimental Design workshop In conjunction with AISTATS 2010",
"volume": "",
"issue": "",
"pages": "169--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Tomanek and Katharina Morik. 2011. Inspect- ing sample reusability for active learning. In Ac- tive Learning and Experimental Design workshop In conjunction with AISTATS 2010, pages 169-181.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Kaufman",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Franchini",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadel- phia, PA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Active discriminative text representation learning",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Byron C",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Zhang, Matthew Lease, and Byron C Wallace. 2017. Active discriminative text representation learning. In AAAI.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Byron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.03820"
]
},
"num": null,
"urls": [],
"raw_text": "Ye Zhang and Byron Wallace. 2015. A sensitivity anal- ysis of (and practitioners' guide to) convolutional neural networks for sentence classification. arXiv preprint arXiv:1510.03820.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Performance of AL relative to i.i.d. across corpora. Transferring actively acquired training sets.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "We highlight practical issues in the use of AL. (a) AL yields inconsistent gains, relative to a baseline of i.i.d. sampling, across corpora. (b) Training a BiLSTM with training sets actively acquired based on the uncertainty of other models tends to result in worse performance than training on i.i.d. samples.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "CoNLL-2003: Sentences from Reuters news with words tagged as person, location, organization, or miscellaneous entities using an IOB BiLSTM-CNN on OntoNotes dataset",
"uris": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Acquisition model</td><td/></tr><tr><td/><td>10% of pool</td><td/><td>20% of pool</td><td/></tr><tr><td/><td/><td colspan=\"2\">Movie reviews</td><td/></tr><tr><td>SVM</td><td>65.3 65.3 65.8</td><td>65.7</td><td>68.2 69.0 69.4</td><td>68.9</td></tr><tr><td>CNN</td><td>65.0 65.3 65.5</td><td>65.4</td><td>69.4 69.1 69.5</td><td>69.5</td></tr><tr><td>LSTM</td><td>63.0 62.0 62.5</td><td>63.1</td><td>67.2 65.1 65.8</td><td>67.0</td></tr><tr><td/><td/><td colspan=\"2\">Subjectivity</td><td/></tr><tr><td>SVM</td><td>85.2 85.6 85.3</td><td>85.5</td><td>87.5 87.6 87.4</td><td>87.6</td></tr><tr><td>CNN</td><td>85.3 85.2 86.3</td><td>86.0</td><td>87.9 87.6 88.4</td><td>88.6</td></tr><tr><td>LSTM</td><td>82.9 82.7 82.7</td><td>84.1</td><td>86.7 86.3 85.8</td><td>87.6</td></tr><tr><td/><td/><td colspan=\"2\">TREC</td><td/></tr><tr><td>SVM</td><td>68.5 68.3 66.8</td><td>68.5</td><td>74.1 74.7 73.2</td><td>74.3</td></tr><tr><td>CNN</td><td>70.9 70.5 69.0</td><td>70.0</td><td>76.1 77.7 77.3</td><td>78.0</td></tr><tr><td>LSTM</td><td>65.2 64.5 63.6</td><td>63.8</td><td>71.5 72.7 71.0</td><td>73.3</td></tr><tr><td/><td/><td colspan=\"2\">Customer reviews</td><td/></tr><tr><td>SVM</td><td>68.8 70.5 70.3</td><td>68.5</td><td>73.6 74.2 72.9</td><td>71.1</td></tr><tr><td>CNN</td><td>70.6 70.9 71.7</td><td>68.2</td><td>74.1 74.5 74.8</td><td>71.5</td></tr><tr><td>LSTM</td><td>66.1 67.2 65.1</td><td>65.9</td><td>68.0 66.6 66.5</td><td>66.3</td></tr><tr><td/><td colspan=\"3\">Named Entity Recognition</td><td/></tr><tr><td/><td/><td colspan=\"2\">Acquisition Model</td><td/></tr><tr><td/><td colspan=\"2\">10% of pool</td><td colspan=\"2\">20% of pool</td></tr><tr><td>Successor</td><td colspan=\"4\">i.i.d. CRF BiLSTM-CNN i.i.d. CRF BiLSTM-CNN</td></tr><tr><td/><td/><td/><td>CoNLL</td><td/></tr><tr><td>CRF</td><td>69.2 70.5</td><td>70.2</td><td>73.6 74.4</td><td>74.0</td></tr><tr><td colspan=\"2\">BiLSTM-CNN 87.4 87.4</td><td>87.8</td><td>89.1 89.6</td><td>89.6</td></tr><tr><td/><td/><td colspan=\"2\">OntoNotes</td><td/></tr><tr><td>CRF</td><td>73.8 75.5</td><td>75.4</td><td>77.6 79.1</td><td>78.7</td></tr><tr><td colspan=\"2\">BiLSTM-CNN 82.6 83.1</td><td>83.1</td><td>84.6 85.2</td><td>84.9</td></tr></table>",
"html": null,
"text": "Successor i.i.d. SVM CNN LSTM i.i.d. SVM CNN LSTMTable 1: Text classification accuracy, evaluated for each combination of acquisition and successor models using uncertainty sampling. Accuracies are reported for training sets composed of 10% and 20% of the document pool. Colors indicate performance relative to i.i.d. baselines: Blue indicates that a model fared better, red that it performed worse, and black that it performed the same."
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "F1 measurements for the NER task, with training sets comprising 10% and 20% of the training pool."
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Text classification dataset statistics."
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>sistently across text classification datasets. In 75%</td></tr><tr><td>of all combinations of model, dataset, and training</td></tr><tr><td>set size, there exists some acquisition function that</td></tr><tr><td>outperforms i.i.d. data. This is consistent with the</td></tr><tr><td>prior literature indicating the effectiveness of AL.</td></tr><tr><td>However, when implementing AL in a real, live</td></tr><tr><td>setting, a practitioner would choose a single acqui-</td></tr><tr><td>sition function ahead of time. To accurately reflect</td></tr><tr><td>this scenario, we must consider the performance</td></tr><tr><td>of individual acquisition functions across multiple</td></tr><tr><td>datasets. Results for individual AL strategies are</td></tr><tr><td>more equivocal. In our reported classification dat-</td></tr><tr><td>apoints, standard AL outperforms i.i.d. sampling</td></tr><tr><td>in only a slight majority (60.9%) of cases.</td></tr></table>",
"html": null,
"text": "Average Spearman's rank correlation coefficients (over five runs) of cosine distances between test set representations learned with native active learning and distances between those learned with transferred actively acquired datasets, at the end of the AL process. Uncertainty is used as the acquisition function in all cases."
}
}
}
}