{
"paper_id": "S19-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:46:21.623173Z"
},
"title": "Exploration of Noise Strategies in Semi-supervised Named Entity Classification",
"authors": [
{
"first": "Pooja",
"middle": [],
"last": "Lakshmi",
"suffix": "",
"affiliation": {},
"email": "poojal@email.arizona.edu"
},
{
"first": "Ajay",
"middle": [],
"last": "Nagesh",
"suffix": "",
"affiliation": {},
"email": "ajaynagesh@didiglobal.com"
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": "",
"affiliation": {},
"email": "msurdeanu@email.arizona.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Noise is inherent in real world datasets and modeling noise is critical during training as it is effective in regularization. Recently, novel semi-supervised deep learning techniques have demonstrated tremendous potential when learning with very limited labeled training data in image processing tasks. A critical aspect of these semi-supervised learning techniques is augmenting the input or the network with noise to be able to learn robust models. While modeling noise is relatively straightforward in continuous domains such as image classification, it is not immediately apparent how noise can be modeled in discrete domains such as language. Our work aims to address this gap by exploring different noise strategies for the semi-supervised named entity classification task, including statistical methods such as adding Gaussian noise to input embeddings, and linguistically-inspired ones such as dropping words and replacing words with their synonyms. We compare their performance on two benchmark datasets (OntoNotes and CoNLL) for named entity classification. Our results indicate that noise strategies that are linguistically informed perform at least as well as statistical approaches, while being simpler and requiring minimal tuning.",
"pdf_parse": {
"paper_id": "S19-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "Noise is inherent in real world datasets and modeling noise is critical during training as it is effective in regularization. Recently, novel semi-supervised deep learning techniques have demonstrated tremendous potential when learning with very limited labeled training data in image processing tasks. A critical aspect of these semi-supervised learning techniques is augmenting the input or the network with noise to be able to learn robust models. While modeling noise is relatively straightforward in continuous domains such as image classification, it is not immediately apparent how noise can be modeled in discrete domains such as language. Our work aims to address this gap by exploring different noise strategies for the semi-supervised named entity classification task, including statistical methods such as adding Gaussian noise to input embeddings, and linguistically-inspired ones such as dropping words and replacing words with their synonyms. We compare their performance on two benchmark datasets (OntoNotes and CoNLL) for named entity classification. Our results indicate that noise strategies that are linguistically informed perform at least as well as statistical approaches, while being simpler and requiring minimal tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modeling noise is a fundamental aspect of machine learning systems. The real-world settings where these systems are deployed certainly expose them to noisy data. Furthermore, noise is used as an effective regularizer during the training of neural networks (e.g., dropout (Srivastava et al., 2014)). Correct prediction in the presence of noisy input demonstrates the robustness of a learning system. A simple analogy illustrates this: in image classification, the addition of a limited amount of random Gaussian noise to an image is barely perceived by our visual system and does not drastically change the label a human assigns to the image (Raj, 2018) . With the emphasis on compliance and recent advances in adversarial techniques, modeling noise has assumed renewed importance (Goodfellow et al., 2014) .",
"cite_spans": [
{
"start": 262,
"end": 287,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 628,
"end": 639,
"text": "(Raj, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 767,
"end": 792,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Noise is an important factor in recent state-of-the-art semi-supervised learning systems for image classification (Tarvainen and Valpola, 2017; Rasmus et al., 2015; Miyato et al., 2018) . In image processing, modeling random noise is relatively straightforward, as images form a continuous domain. For instance, adding a small amount of random Gaussian jitter can be considered noisy input, as can other image transformations such as translation, rotation, removing color, and so on. However, a discrete domain such as language is not easily amenable to noise augmentation. While one can certainly add random Gaussian noise to embeddings of words (continuous vector representations such as word2vec, rather than one-hot encodings), the intuition behind such perturbation is not apparent. Algorithms that require explicit modeling of noise demand careful thought in the language domain and are challenging to design (Clark et al., 2018; Nagesh and Surdeanu, 2018a) .",
"cite_spans": [
{
"start": 113,
"end": 142,
"text": "(Tarvainen and Valpola, 2017;",
"ref_id": "BIBREF12"
},
{
"start": 143,
"end": 163,
"text": "Rasmus et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 164,
"end": 184,
"text": "Miyato et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 894,
"end": 914,
"text": "(Clark et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 915,
"end": 942,
"text": "Nagesh and Surdeanu, 2018a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, previous work in the area of modeling noise in natural language processing (NLP) applications has been limited. Clark et al. (2018) acknowledge the difficulty of modeling noise for language and incorporate a simple word dropout in their experiments, as does the work of Nagesh and Surdeanu (2018a). Nagesh and Surdeanu (2018b) add a standard Gaussian perturbation with a fixed variance to the pretrained word vectors to simulate noise. Belinkov and Bisk (2017) is perhaps one of the most comprehensive works exploring various noise strategies, albeit with a different end goal in mind: their work explores the degree of robustness of various neural network approaches to different types of noise on a machine translation task. [Figure 1: Mean Teacher framework for the named entity classification task (left). E_wi are the words in the entity mention; W_i are the words in the context, with the entity mention replaced by the <E> token. cost = (classification cost) + \u03bb(consistency cost). Unlabeled examples have only the consistency cost. Backpropagation flows only through the student model; the teacher model parameters are averaged. The architecture of the student or teacher model (right). Noise can be added to the parts shown in boldface. predictions = softmax(output layer).]",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "Clark et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 300,
"end": 327,
"text": "Nagesh and Surdeanu (2018a)",
"ref_id": "BIBREF6"
},
{
"start": 467,
"end": 491,
"text": "Belinkov and Bisk (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 623,
"end": 631,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we discuss several noise strategies for the semi-supervised named entity classification task. Some of these, such as word-dropout and synonym-replace, are linguistically motivated and discrete in nature, while others, such as Gaussian perturbation of word embeddings, are statistical. We show that linguistic noise, while simple, performs as well as statistical noise. A combination of linguistic noise and network dropout provides the best performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semi-supervised learning (SSL) is one of the cornerstones in machine learning (ML) (Zhu, 2005) . This is especially true in the case of natural language processing (NLP), as obtaining labeled training data is a costly and tedious process for most of the data-hungry deep learning models.",
"cite_spans": [
{
"start": 83,
"end": 94,
"text": "(Zhu, 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Deep Learning",
"sec_num": "2"
},
{
"text": "There has been a flurry of recent work in SSL in the image processing community (Tarvainen and Valpola, 2017; Rasmus et al., 2015) . Some of these works have achieved impressive performance on hard perceptual tasks. However, repurposing them for NLP is not a straightforward exercise. As stated earlier, many of these approaches require noise (along with an optional input augmentation step such as rotation) to change the percept slightly, in order to achieve robust performance. How to augment data with noise for NLP tasks is less clear, as the input domain consists of discrete tokens rather than continuous inputs such as images.",
"cite_spans": [
{
"start": 80,
"end": 109,
"text": "(Tarvainen and Valpola, 2017;",
"ref_id": "BIBREF12"
},
{
"start": 110,
"end": 130,
"text": "Rasmus et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Deep Learning",
"sec_num": "2"
},
{
"text": "In our previous work (Nagesh and Surdeanu, 2018a), we evaluated three different semi-supervised learning paradigms, namely bootstrapping-based approaches (Gupta and Manning, 2015) , ladder networks (Rasmus et al., 2015) , and mean teacher (Tarvainen and Valpola, 2017) , for the semi-supervised named entity classification (NEC) task. The mean teacher (MT) approach produced the best performance. However, our exploration of noise was limited in that study; it is therefore the focus of the current paper.",
"cite_spans": [
{
"start": 155,
"end": 180,
"text": "(Gupta and Manning, 2015)",
"ref_id": "BIBREF3"
},
{
"start": 199,
"end": 220,
"text": "(Rasmus et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 238,
"end": 267,
"text": "(Tarvainen and Valpola, 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Deep Learning",
"sec_num": "2"
},
{
"text": "The MT framework belongs to the general class of teacher-student networks that learn in the semi-supervised setting, i.e., with limited supervision and a large amount of unlabeled data; it is illustrated in the left part of Figure 1 . It consists of two models, termed student and teacher, which are structurally identical but differ in the way their parameters are updated. While the student is updated using regular back-propagation, the parameters of the teacher are a weighted average of the student parameters across different epochs. Further, the cost function is a linear combination of a supervision cost (from the limited amount of supervision) and a consistency cost (agreement between the representations from the teacher and student models, measured as the L 2 norm of their difference). The motivation for using consistency in the cost function and for averaging the parameters in the teacher is to reduce confirmation bias in the teacher when its own predictions are used as pseudo-labels during the training process (akin to an averaged perceptron). This provides a strong proxy for the student to rely on in the absence of labeled training data (Tarvainen and Valpola, 2017) .",
"cite_spans": [
{
"start": 1152,
"end": 1181,
"text": "(Tarvainen and Valpola, 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-supervised Deep Learning",
"sec_num": "2"
},
{
"text": "The specific model we employ for the semi-supervised named entity classification (NEC) task, along with a canonical input data point, is depicted in the right part of Figure 1 . The input consists of an entity mention and the sentence it appears in, which serves as the context. The goal is to predict the label of the entity. In the semi-supervised setting, only a few labeled data points are provided; the rest of the data is unlabeled. We initialize the words in the example with pre-trained word embeddings and run a bi-directional LSTM on both the entity mention and its context. We concatenate the final LSTM states of the mention and context representations and run a multi-layer perceptron with one hidden layer to produce the output layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-supervised Deep Learning",
"sec_num": "2"
},
{
"text": "A key aspect of the MT framework is the augmentation of the input and/or the network with noise as shown in the right part of Figure 1 . We explain this in detail in the next section.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-supervised Deep Learning",
"sec_num": "2"
},
{
"text": "A critical component of the algorithm is the addition of noise to the models. Noise can be added in three key places in the model presented in the previous section, as depicted in Figure 1 (parts in boldface). We add similar but distinct noise to both the teacher and the student models. (1) Input noise: linguistically motivated noise such as word dropout, or replacing words with their synonyms (more details below). (2) Statistical noise: standard Gaussian perturbations of the pre-trained word embeddings. (3) Network noise: dropout in the intermediate layers of the student and teacher networks.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploration of Noise Strategies",
"sec_num": "3"
},
{
"text": "The purpose of adding noise is to regularize the model parameters and to help learn robust models when labeled training data is very limited, via the consistency cost between the teacher and student models. Consequently, the MT framework can also be viewed as a consistency regularization technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploration of Noise Strategies",
"sec_num": "3"
},
{
"text": "The input noise is applied to the context of an entity mention; noise is added to a fixed number of words in a context. We explored different types of input noise: (1) Word-dropout: dropping words randomly in the input context; (2) Synonym-replace: replacing a randomly chosen word in the context with its synonym from WordNet; (3) Word-dropout-idf: dropping the most informative words in the context, as determined by the inverse document frequency (IDF) of context words computed offline; (4) Synonym-replace-idf: replacing words in the context according to their IDF (as described above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploration of Noise Strategies",
"sec_num": "3"
},
{
"text": "For the statistical noise, we perturbed the pre-trained word embeddings with Gaussian noise with a fixed standard deviation. We varied the standard deviation and the number of words to which this type of noise is added. As we demonstrate in the experiments, this requires careful tuning. Further, adding Gaussian noise is computationally intensive, as the operation must be performed in every minibatch during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploration of Noise Strategies",
"sec_num": "3"
},
{
"text": "We implemented network noise as dropout with a fixed probability in both the context representation and the hidden layer of the multi-layer perceptron.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploration of Noise Strategies",
"sec_num": "3"
},
{
"text": "Finally, we combined network noise with input noise. Empirically, we show that this combination yields the best performance for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploration of Noise Strategies",
"sec_num": "3"
},
{
"text": "Task and datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The task investigated in this work is named entity classification (NEC), defined as identifying the correct type of an entity mention in a given context, e.g., classifying \"Bill Clinton\" in the sentence \"Former President Bill Clinton expects to attend the inauguration tomorrow.\" as a person name. We define the context as the complete sentence in which the entity mention appears. We use standard benchmark datasets, namely the CoNLL-2003 shared task dataset (Tjong Kim Sang and De Meulder, 2003) and Ontonotes-2013 (Pradhan et al., 2013) . Our setting is semi-supervised NEC, so we randomly select a very small percentage of the training dataset (40 datapoints, i.e., 0.18%, of CoNLL and 440 datapoints, i.e., 0.56%, of Ontonotes) as labeled data, and artificially remove the labels of the remaining datapoints to simulate the semi-supervised setting. Our task is to predict the correct labels of the unlabeled datapoints. CoNLL has 4 label categories, while Ontonotes has 11. We measure accuracy as the percentage of datapoints predicted with the correct labels. Experimental settings: We use the entity boundaries for all datapoints during training, but use labels only for the small portion of the data indicated above. An input to our model is shown in the bottom-right of Figure 1 . To reduce computational overhead, we filtered out entity mentions longer than 5 tokens from the Ontonotes dataset (4 for CoNLL), and contexts longer than 59 tokens or shorter than 5 (40 and 3 respectively for CoNLL). Following Nagesh and Surdeanu (2018a), we initialized the pre-trained word embeddings from Levy and Goldberg (2014) (300d). We ran a 100d bi-directional LSTM on both the entity and context representations, concatenated their outputs, and fed them to a 300d multi-layer perceptron with ReLU activations. For network dropout we used p = 0.2. This is similar to the dropout regularization used in deep neural networks, but since the dropout layer drops neurons randomly in the teacher and the student, it acts as noise in the MT framework. We tried a few variations of this model, such as augmenting the LSTM with position embeddings or attention and replacing the LSTM with an averaging model, but did not observe a considerable improvement in performance.",
"cite_spans": [
{
"start": 457,
"end": 494,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF13"
},
{
"start": 499,
"end": 513,
"text": "Ontonotes-2013",
"ref_id": null
},
{
"start": 514,
"end": 535,
"text": "(Pradhan et al., 2013",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1296,
"end": 1304,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Results: We present our main results in Table 1 . An important note is that the results are the classification accuracy over 21,373 and 78,492 datapoints in CoNLL and Ontonotes respectively, using only a tiny sliver of the labels in these datasets as supervision. Increasing the number of labeled examples used as supervision has the expected effect of improving performance; however, it is often difficult to obtain a sufficient number of labeled examples in the real world. The datapoints for supervision are chosen randomly, with equal representation of all classes. The analysis of the amount of supervision and its effect on accuracy is reported in Nagesh and Surdeanu (2018a) . We report the average (along with the variance) of 5 randomized runs in each noise setting. Our baseline is the no-noise setting, in which the inputs to the student and teacher models are not augmented with noise. From Table 1 , we observe that adding noise is necessary for good performance: the various noise strategies consistently improve performance over the baseline on both datasets. Network noise is a crucial factor for good performance. Input noise strategies that are linguistically motivated, such as word-dropout and synonym-replace, perform as well as statistical noise. More specifically, word-dropout of 3 words and synonym-replace of 3 words are the highest-performing non-network noise strategies on CoNLL and Ontonotes respectively. Synonym-replace is an interesting strategy, as we believe it makes the input more interpretable: the word embedding of a synonym is close to that of the original word in the vector space. In contrast, Gaussian embedding noise is a random delta added to perturb the embedding, and we are not sure of its orientation in the high-dimensional space. Adding Gaussian noise to all words results in performance poorer than or close to the baseline. 1 Furthermore, Gaussian noise requires fine-tuning of the stdev value and of the number of words to which it is applied, which makes it a computationally expensive approach ( Table 2 ). The performance of the *-*-idf runs suggests that random word selection is as good or better. This is ideal, since it is simpler and independent of the data distribution. Finally, network noise in combination with linguistic input noise provides the best performance, as seen in Figure 2 . One possible explanation is that ensembling two high-performing systems is akin to combining two good signals, achieving better overall results.",
"cite_spans": [
{
"start": 643,
"end": 670,
"text": "Nagesh and Surdeanu (2018a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 888,
"end": 895,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 2091,
"end": 2099,
"text": "Table 2)",
"ref_id": "TABREF0"
},
{
"start": 2385,
"end": 2393,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The modeling of noise in discrete domains such as language has so far received limited attention in the language processing community. In this work we explore several noise strategies for the semi-supervised named entity classification task using the mean teacher framework, in which noise augmentation is a crucial factor. We show that linguistic noise such as word-dropout and synonym-replace performs as well as statistical noise, while being simpler and easier to tune. A combination of linguistic noise and network dropout provides the best performance. As future work, we wish to explore noise augmentation in other language processing tasks, such as fine-grained entity typing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In Table 1 , for Gaussian noise, the stdev value is chosen arbitrarily as 4. If we have the luxury of tuning this parameter, then, as Table 2 shows, Gaussian noise gives the best performance at stdev 0.05.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion. CoRR, abs/1711.02173.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semi-supervised sequence modeling with cross-view training",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6572"
]
},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adver- sarial examples. arXiv preprint arXiv:1412.6572.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distributed representations of words to guide bootstrapped entity classifiers",
"authors": [
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonal Gupta and Christopher D. Manning. 2015. Dis- tributed representations of words to guide boot- strapped entity classifiers. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), Bal- timore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Virtual adversarial training: A regularization method for supervised and semi-supervised learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Maeda",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ishii",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Koyama",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "",
"pages": "1--1",
"other_ids": {
"DOI": [
"10.1109/TPAMI.2018.2858821"
]
},
"num": null,
"urls": [],
"raw_text": "T. Miyato, S. Maeda, S. Ishii, and M. Koyama. 2018. Virtual adversarial training: A regularization method for supervised and semi-supervised learn- ing. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-1.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An exploration of three lightly-supervised representation learning approaches for named entity classification",
"authors": [
{
"first": "Ajay",
"middle": [],
"last": "Nagesh",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ajay Nagesh and Mihai Surdeanu. 2018a. An ex- ploration of three lightly-supervised representation learning approaches for named entity classification. In COLING.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Keep your bearings: Lightly-supervised information extraction with ladder networks that avoids semantic drift",
"authors": [
{
"first": "Ajay",
"middle": [],
"last": "Nagesh",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ajay Nagesh and Mihai Surdeanu. 2018b. Keep your bearings: Lightly-supervised information extraction with ladder networks that avoids semantic drift. In NAACL HLT 2018.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Towards robust linguistic analysis using ontonotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Bjrkelund",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bjrkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Data augmentation -how to use deep learning when you have limited data -part 2",
"authors": [
{
"first": "Bharath",
"middle": [],
"last": "Raj",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2018--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharath Raj. 2018. Data augmentation -how to use deep learning when you have limited data -part 2. https://bit.ly/2IvKw11. Accessed: 2018- 12-10.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semisupervised learning with ladder network",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Rasmus",
"suffix": ""
},
{
"first": "Harri",
"middle": [],
"last": "Valpola",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Honkala",
"suffix": ""
}
],
"year": 2015,
"venue": "Mathias Berglund, and Tapani Raiko",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antti Rasmus, Harri Valpola, Mikko Honkala, Math- ias Berglund, and Tapani Raiko. 2015. Semi- supervised learning with ladder network. CoRR, abs/1507.02672.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Weight-averaged consistency targets improve semi-supervised deep learning results",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Tarvainen",
"suffix": ""
},
{
"first": "Harri",
"middle": [],
"last": "Valpola",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antti Tarvainen and Harri Valpola. 2017. Weight-averaged consistency targets improve semi-supervised deep learning results. CoRR, abs/1703.01780.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2003, pages 142-147. Edmonton, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semi-supervised learning literature survey",
"authors": [
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojin Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table><tr><td/><td/><td>CoNLL</td><td>Ontonotes</td></tr><tr><td>No noise</td><td/><td>65.76 (\u00b12.06)</td><td>64.20 (\u00b12.27)</td></tr><tr><td/><td>1 W</td><td>67.70 (\u00b12.97)</td><td>67.46 (\u00b13.53)</td></tr><tr><td>Word-dropout</td><td>2 W</td><td>68.15 (\u00b13.15)</td><td>68.19 (\u00b13.35)</td></tr><tr><td/><td>3 W</td><td>68.54 (\u00b13.38)</td><td>68.42 (\u00b13.94)</td></tr><tr><td/><td>1 W</td><td>67.56 (\u00b13.04)</td><td>67.70 (\u00b13.20)</td></tr><tr><td>Synonym-replace</td><td>2 W</td><td>67.95 (\u00b13.17)</td><td>68.40 (\u00b13.62)</td></tr><tr><td/><td>3 W</td><td>68.35 (\u00b13.07)</td><td>68.46 (\u00b14.06)</td></tr><tr><td/><td>1 W</td><td>67.59 (\u00b13.03)</td><td>67.38 (\u00b13.29)</td></tr><tr><td>Word-dropout-idf</td><td>2 W</td><td>68.11 (\u00b13.17)</td><td>68.14 (\u00b13.63)</td></tr><tr><td/><td>3 W</td><td>68.49 (\u00b13.27)</td><td>68.30 (\u00b13.77)</td></tr><tr><td/><td>1 W</td><td>67.51 (\u00b13.02)</td><td>67.24 (\u00b13.55)</td></tr><tr><td>Synonym-replace-idf</td><td>2 W</td><td>67.79 (\u00b13.15)</td><td>68.23 (\u00b13.42)</td></tr><tr><td/><td>3 W</td><td>68.26 (\u00b13.05)</td><td>67.95 (\u00b13.96)</td></tr><tr><td>Gaussian (stdev=4)</td><td>all W</td><td>62.98 (\u00b12.66)</td><td>64.89 (\u00b15.12)</td></tr><tr><td>Network Dropout</td><td/><td>68.40 (\u00b13.11)</td><td>71.77 (\u00b12.18)</td></tr></table>",
"type_str": "table",
"text": "Overall accuracies comparing all noise strategies on CoNLL and Ontonotes datasets. No noise is the baseline. X W \u21d2 X words perturbed by noise. Accuracy is % of correctly classified datapoints. (\u00b1y) \u21d2 variance of 5 runs.",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Performance upon combining noise strategies, CoNLL (left) and Ontonotes (right). Best performance: network dropout + 5W word-dropout - 70.57% (CoNLL), network dropout + 3W synonym-replace - 72.78% (Ontonotes)",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null
}
}
}
}