{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:53:41.015738Z"
},
"title": "Improving Distantly Supervised Relation Extraction with Self-Ensemble Noise Filtering",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIT Kharagpur",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIT Kharagpur",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIT Kharagpur",
"location": {
"country": "India"
}
},
"email": "soujanya.poria@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distantly supervised models are very popular for relation extraction since we can obtain a large amount of training data using the distant supervision method without human annotation. In distant supervision, a sentence is considered as a source of a tuple if the sentence contains both entities of the tuple. However, this condition is too permissive and does not guarantee the presence of relevant relation-specific information in the sentence. As such, distantly supervised training data contains much noise which adversely affects the performance of the models. In this paper, we propose a selfensemble filtering mechanism to filter out the noisy samples during the training process. We evaluate our proposed framework on the New York Times dataset which is obtained via distant supervision. Our experiments with multiple state-of-the-art neural relation extraction models show that our proposed filtering mechanism improves the robustness of the models and increases their F1 scores.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Distantly supervised models are very popular for relation extraction since we can obtain a large amount of training data using the distant supervision method without human annotation. In distant supervision, a sentence is considered as a source of a tuple if the sentence contains both entities of the tuple. However, this condition is too permissive and does not guarantee the presence of relevant relation-specific information in the sentence. As such, distantly supervised training data contains much noise which adversely affects the performance of the models. In this paper, we propose a selfensemble filtering mechanism to filter out the noisy samples during the training process. We evaluate our proposed framework on the New York Times dataset which is obtained via distant supervision. Our experiments with multiple state-of-the-art neural relation extraction models show that our proposed filtering mechanism improves the robustness of the models and increases their F1 scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of relation extraction is about finding relation or no relation between two entities. This is an important task to fill the gaps of existing knowledge bases (KB). Open information extraction (OpenIE) (Banko et al., 2007) is one way of extracting relations from text. They consider the verb in a sentence as the relation and then find the noun phrases located to the left and right of that verb as the entities. But this process has two serious problems: First, the same relation can appear in the text with many verb forms and OpenIE treats them as different relations. This leads to the duplication of relations in KB. Second, OpenIE treats any verbs in a sentence as a relation which can generate a large number of insignificant tuples which cannot be added to a KB.",
"cite_spans": [
{
"start": 209,
"end": 229,
"text": "(Banko et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Supervised relation extraction models, on the other hand, do not have these problems. But they require a large amount of annotated data which is difficult to get. Mintz et al. (2009) , Riedel et al. (2010) , and Hoffmann et al. (2011) used the idea of distant supervision to automatically obtain the training data to overcome this problem. The idea of distant supervision is that if a sentence contains both the entities of a tuple, it is chosen as a source sentence of this tuple. Although this process can generate some noisy training instances, it can give a significant amount of training data which can be used to build supervised models for this task. They map the tuples from existing KBs such as Freebase (Bollacker et al., 2008) to the text corpus such as Wikipedia articles (Mintz et al., 2009) or New York Times articles (Riedel et al., 2010; Hoffmann et al., 2011) .",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "Mintz et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 185,
"end": 205,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF18"
},
{
"start": 208,
"end": 234,
"text": "and Hoffmann et al. (2011)",
"ref_id": "BIBREF6"
},
{
"start": 713,
"end": 737,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 784,
"end": 804,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 832,
"end": 853,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 854,
"end": 876,
"text": "Hoffmann et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on distantly supervised training data, researchers have proposed many state-of-the-art models for relation extraction. Mintz et al. (2009) , Riedel et al. (2010) , and Hoffmann et al. (2011) proposed feature-based learning models and used entity tokens and their nearby tokens, their part-ofspeech tags, and other linguistic features to train their models. Recently, many neural network-based models have been proposed to avoid feature engineering. Zeng et al. (2014) and Zeng et al. (2015) used convolutional neural networks (CNN) with max-pooling to find the relation between two given entities. Shen and Huang (2016) , Jat et al. (2017) , Nayak and Ng (2019) used attention framework in their neural models for this task.",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "Mintz et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 147,
"end": 167,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF18"
},
{
"start": 170,
"end": 196,
"text": "and Hoffmann et al. (2011)",
"ref_id": "BIBREF6"
},
{
"start": 455,
"end": 473,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF30"
},
{
"start": 478,
"end": 496,
"text": "Zeng et al. (2015)",
"ref_id": "BIBREF29"
},
{
"start": 604,
"end": 625,
"text": "Shen and Huang (2016)",
"ref_id": "BIBREF20"
},
{
"start": 628,
"end": 645,
"text": "Jat et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 648,
"end": 667,
"text": "Nayak and Ng (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "But the distantly supervised data may contain many noisy samples. Sometimes sentences may contain the two entities of a positive tuple, but they may not contain the relation specific information. These kinds of sentences and entity pairs are considered as positive noisy samples. Another set of noisy samples comes from the way samples for None relation are created. If a sentence contains two entities from the KB and there is no positive relation between these two entities in the KB, this sentence and entity pair is considered as a sample for None relation. But knowledge bases are not complete and many valid relations among the entities in the KBs are missing. So it may be possible that the sentence contains information about some positive relation between the two entities, but since the relation is not present in the KB, this sentence and entity pair is incorrectly considered as a sample for None relation. These kinds of sentences and entity pairs are considered as negative noisy samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We include examples of clean and noisy samples generated using distant supervision in Table 1 . The KB contains many entities out of which four entities are Barack Obama, Hawaii, Karkuli, and West Bengal. Barack Obama and Hawaii have a birth place relation between them. Karkuli and West Bengal are not connected with any relations in the KB. So we assume that there is no valid relation between these two entities. The sentence in the first sample contains the two entities Barack Obama and Hawaii, and it also contains information about Obama being born in Hawaii. So this sentence is a correct source for the tuple (Barack Obama, Hawaii, birth place). So this is a positive clean sample. The sentence in the second sample contains the two entities, but it does not contain the information about Barack Obama being born in Hawaii. So it is a positive noisy sample. In the case of the third and fourth samples, according to distant supervision, they are considered as samples for None relation. But the sentence in the third sample contains the information for the relation located in between Karkuli and West Bengal. So the third sample is a negative noisy sample. The fourth sample is an example of a negative clean sample.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The presence of the noisy samples in the distantly supervised data adversely affects the performance of the models. Our goal is to remove the noisy samples from the training process to make the models more robust for this task. We propose a selfensemble based noisy samples filtering method for this purpose. Our framework identifies the noisy samples during the training and removes them from training data in the following iterations. This framework can be used with any supervised relation extraction model. We run experiments with several state-of-the-art neural models, namely Convolutional Neural Network (CNN) (Zeng et al., 2014) , Piecewise Convolutional Neural Network (PCNN) (Zeng et al., 2015) , Entity Attention (EA) (Shen and Huang, 2016) , and Bi-GRU Word Attention (BGWA) (Jat et al., 2017) with the distantly supervised New York Times dataset (Hoffmann et al., 2011) . Our framework improves the F1 score of these models by 2.1%, 1.1%, 2.1%, and 2.3% respectively 1 .",
"cite_spans": [
{
"start": 617,
"end": 636,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 685,
"end": 704,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 729,
"end": 751,
"text": "(Shen and Huang, 2016)",
"ref_id": "BIBREF20"
},
{
"start": 787,
"end": 805,
"text": "(Jat et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 859,
"end": 882,
"text": "(Hoffmann et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentence-level relation extraction is defined as follows: Given a sentence S and two entities {E 1 , E 2 } marked in the sentence, find the relation r(E 1 , E 2 ) between these two entities in S from a pre-defined set of relations R \u222a {None}. R is the set of positive relations and None indicates that none of the relations in R holds between the two marked entities in the sentence. The relation between the entities is argument orderspecific, i.e., r(E 1 , E 2 ) and r(E 2 , E 1 ) are not the same. The input to the system is a sentence S and two entities E 1 and E 2 , and output is the relation r(E 1 , E 2 ) \u2208 R \u222a {None}. Distant supervised datasets are used for training relation extraction models. But the presence of noisy samples negatively affects their performance. In this work, we try to identify these noisy samples during training and filter them out from the subsequent training process to improve the performance of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "3 Self-Ensemble Filtering Framework Figure 1 shows our proposed self-ensemble filtering framework. This framework is inspired from the work by Nguyen et al. 2020. We start with clean and noisy samples and assume that all samples are clean. At the end of each iteration, we predict the labels of the entire training samples. Based on the predicted label and the label assigned by distant supervision, we decide to filter out a sample in the next iteration. After each iteration, we consider the entire training samples for the filtering process. The individual models at each iteration can be very sensitive to wrong labels, so in our training process, we maintain a self-ensemble version of the models which is a moving average of the models of previous iterations. We hypothesize that the predictions of the ensemble model are more stable than the individual models. So the predictions from the ensemble model are used to identify the noisy samples. These noisy samples are removed from the training samples of the next iteration. We consider the entire distantly supervised training data for prediction and filtering so that if a sample is filtered out wrongly in an iteration, it can be included again in the training data in the subsequent iteration.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "We use the student-teacher training mechanism proposed by Tarvainen and Valpola (2017) for our selfensemble model learning. A student model can be any supervised learning model such as a neural network model. A teacher model is the clone of student model with same parameters. The weights of parameters of this teacher model is the exponential moving average of the weights of parameters of the student model. So this teacher model is the self-ensemble version of the student model. An additional consistency loss is used to maintain the consistency of the predictions of the student model and the teacher model. Following is the stepby-step algorithm to train such an self-ensemble model:",
"cite_spans": [
{
"start": 58,
"end": 86,
"text": "Tarvainen and Valpola (2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Ensemble Training",
"sec_num": "3.1"
},
{
"text": "1. First, a student model M i s is initialized. This can be any supervised relation extraction model such as CNN, PCNN, Entity Attention (EA) or Bi-GRU Word Attention (BGWA) model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Ensemble Training",
"sec_num": "3.1"
},
{
"text": "t is cloned from the student model M i s . We completely detach the weights of the teacher model from the student model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A teacher model M i",
"sec_num": "2."
},
{
"text": "to update the parameters of the student model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A gradient descent based optimizer is selected",
"sec_num": "3."
},
{
"text": "loss of the student model for the classification task and a consistency loss between the student model and teacher model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss is calculated based on the cross-entropy",
"sec_num": "4."
},
{
"text": "\u2022 In each step or mini-batch:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In each training iteration or epoch:",
"sec_num": "5."
},
{
"text": "-Update the weights of the student model M i s using the selected optimizer and the loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In each training iteration or epoch:",
"sec_num": "5."
},
{
"text": "-Update the weights of the teacher model M i t as an exponential moving average of the student weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In each training iteration or epoch:",
"sec_num": "5."
},
{
"text": "\u2022 Evaluate the performance of the teacher model M i t on a validation dataset. If we decide to continue the training after evaluation, we use a filtering strategy at this point to remove the noisy samples from the training data. This clean training data is used in the next iteration of the training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In each training iteration or epoch:",
"sec_num": "5."
},
{
"text": "t . This teacher model is the self-ensemble version of the student model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Return the best teacher model M i",
"sec_num": "6."
},
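The numbered steps above can be sketched as a generic training loop. All class and callable names here are hypothetical placeholders, not the authors' API: `loss_fn` computes the combined loss of step 4, `opt_step` applies one optimizer update, `ema_step` performs the exponential-moving-average teacher update, and `filter_noisy` implements the filtering strategy of Section 3.4.

```python
import copy

def train_self_ensemble(student, data, epochs, loss_fn, opt_step, ema_step,
                        evaluate, filter_noisy):
    """Skeleton of the student-teacher loop described in steps 1-6.

    Hypothetical callables: loss_fn(student, teacher, batch) -> scalar loss,
    opt_step(student, loss) applies one optimizer update, ema_step(teacher,
    student) moves the teacher toward the student, evaluate(teacher) scores
    the teacher on validation data, filter_noisy(teacher, data) returns the
    cleaned training data for the next iteration.
    """
    teacher = copy.deepcopy(student)          # step 2: detached clone
    best_teacher, best_score = teacher, float("-inf")
    current = list(data)                      # initially assume all samples clean
    for epoch in range(epochs):               # step 5: training iterations
        for batch in current:                 # each mini-batch
            loss = loss_fn(student, teacher, batch)  # step 4: L_ce + L_mse
            opt_step(student, loss)                  # update the student
            ema_step(teacher, student)               # EMA-update the teacher
        score = evaluate(teacher)
        if score > best_score:
            best_teacher, best_score = copy.deepcopy(teacher), score
        # Re-filter from the FULL data so wrongly removed samples can return.
        current = filter_noisy(teacher, data)
    return best_teacher                       # step 6: best self-ensemble model
```

Note that filtering always starts from the full distantly supervised data, matching the paper's point that a wrongly discarded sample can re-enter training later.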
{
"text": "We use the negative log-likelihood loss of the relation classification task from the student model (L ce ) and a mean-squared error based consistency loss between the student and teacher model (L mse ) to update the student model. For L ce , p(r i |s i , e 1 i , e 2 i , \u03b8 s ) is the conditional probability of the true relation r i when the sentence s i , two entities e 1 i and e 2 i , and the model parameters of the student \u03b8 s are given. For L mse , y i,j s and y i,j t are the softmax output of the j th relation class of i th training sample in the batch from the student model and the teacher model respectively. C is number of relation class in the dataset and B is the batch size. The parameters of the student model \u03b8 s are updated based on the combined loss L using an gradient descent based optimizer. The consistency loss (L mse ) makes sure that output softmax distribution of the student model and teacher model are close to each other, thus maintain the consistency of the output from both models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function & Updating the Student",
"sec_num": "3.2"
},
{
"text": "L ce = \u2212 1 B B i=1 log(p(r i |s i , e 1 i , e 2 i , \u03b8 s )) L mse = 1 B B i=1 C j=1 (y i,j s \u2212 y i,j t ) 2 L = L ce + L mse",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function & Updating the Student",
"sec_num": "3.2"
},
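The combined objective described above can be sketched in plain NumPy; the function name and the `(B, C)` array layout are illustrative, not taken from the authors' code.

```python
import numpy as np

def combined_loss(student_probs, teacher_probs, true_labels):
    """Sketch of L = L_ce + L_mse for one mini-batch.

    student_probs, teacher_probs: (B, C) softmax outputs of the student
    and teacher models; true_labels: (B,) relation indices assigned by
    distant supervision.  Hypothetical helper, for illustration only.
    """
    B = student_probs.shape[0]
    # L_ce: negative log-likelihood of the true relation under the student.
    l_ce = -np.mean(np.log(student_probs[np.arange(B), true_labels]))
    # L_mse: squared difference of the two softmax distributions, summed
    # over the C relation classes and averaged over the batch.
    l_mse = np.mean(np.sum((student_probs - teacher_probs) ** 2, axis=1))
    return l_ce + l_mse
```

When student and teacher agree exactly, the consistency term vanishes and only the classification loss drives the update.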
{
"text": "We update the parameters of teacher model \u03b8 t based on the exponential moving average of the all previous optimization steps of the student model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating the Teacher",
"sec_num": "3.3"
},
{
"text": "W(\u03b8 l t ) = \u03b1W(\u03b8 l\u22121 t ) + (1 \u2212 \u03b1)W(\u03b8 l s ) where W(\u03b8 l t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating the Teacher",
"sec_num": "3.3"
},
{
"text": "and W(\u03b8 l s ) are the weights of the parameters of the teacher model and student model respectively after the l th global optimization step. W(\u03b8 l\u22121 t ) is the weights of the teacher model parameters up to the l \u2212 1 th global optimization step. \u03b1 is a weight factor to control the contribution of the student model of the current step and the teacher model up to the previous step. At the initial optimization steps of the training, we keep the value of \u03b1 low as the self-ensemble model or teacher model is not stable yet and the student model should contribute more. As the training progress and the self-ensemble model becomes stable, we slowly increase the value of \u03b1 so that we take the majority contribution from the self-ensemble model itself. We use the following Gaussian curve (He et al., 2018) to ramp up the value of \u03b1 from 0 to \u03b1 max which is a hyper-parameter of the model.",
"cite_spans": [
{
"start": 786,
"end": 803,
"text": "(He et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Updating the Teacher",
"sec_num": "3.3"
},
{
"text": "T = E * L B p = 1 \u2212 min(step idx, T ) T \u03b1 = e \u22125p 2 \u03b1 max",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating the Teacher",
"sec_num": "3.3"
},
{
"text": "Here E is the epoch count to ramp up the \u03b1 from 0 to \u03b1 max . E is a hyper-parameter of the model and generally, this is lower than the total number of epochs of the training process. L is the size of distant supervised training data at the beginning of training, B is the batch size, and step idx is the current global optimization step count of the training. T represents the number of global optimization steps required for \u03b1 to reach its maximum value \u03b1 max .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating the Teacher",
"sec_num": "3.3"
},
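Under the definitions above, the Gaussian ramp-up and the EMA teacher update can be sketched as two small helpers; the function names and the flat-list parameter layout are our assumptions, not the paper's implementation.

```python
import math

def alpha_at_step(step_idx, E, L, B, alpha_max=0.9):
    """Gaussian ramp-up of alpha: T = E*L/B global steps to reach alpha_max."""
    T = E * L / B
    p = 1.0 - min(step_idx, T) / T
    return math.exp(-5.0 * p * p) * alpha_max

def update_teacher(teacher_weights, student_weights, alpha):
    """W(theta_t^l) = alpha * W(theta_t^{l-1}) + (1 - alpha) * W(theta_s^l),
    applied element-wise to flat lists of parameter values."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]
```

At step 0 the schedule gives alpha = e^{-5} * alpha_max (so the student dominates early), and once step_idx reaches T it stays at alpha_max.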
{
"text": "After each iteration, we use a validation dataset to determine to stop or to continue the training. If we decide to continue the training, then we use the self-ensemble model or the teacher model to filter out noisy samples from the initial training data. This clean training data is used in the next training iteration. We use the self-ensemble model to predict the relation on initial training data for Figure 2 : Ramping up of \u03b1 during training. We use E=5, T=33,000, and \u03b1 max = 0.9 to generate this curve for the demonstration of how \u03b1 reaches from 0 to \u03b1 max . the filtering process after each iteration. We use the entire initial training data for prediction so that if a training sample is filtered out wrongly in an iteration as a noisy one, it can be used again in subsequent training iterations if the subsequent selfensemble model predicts the sample as a clean one.",
"cite_spans": [],
"ref_spans": [
{
"start": 405,
"end": 413,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Noise Filtering Strategy",
"sec_num": "3.4"
},
{
"text": "Generally, distantly supervised datasets contain a largely high number of None samples than the valid relation samples. For this reason, we choose a strict filtering strategy for None samples and a lenient filtering strategy for valid relation samples. We consider a None sample as clean if teacher models predict the None relation. Otherwise, this sample is considered as noisy and filtered out from the training set of next iteration. For the valid relations, we consider a sample as clean if the relation assigned by distant supervision belongs to the top K predictions of the teacher model. This clean training data is used in the next training iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Filtering Strategy",
"sec_num": "3.4"
},
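The two filtering rules can be expressed as a single keep/discard predicate. The argument layout, the `none_idx` convention, and the function name are our assumptions for illustration.

```python
import numpy as np

def keep_sample(ds_label, teacher_probs, none_idx, K=3):
    """Return True if the sample survives filtering for the next iteration.

    Strict rule: a None sample is kept only if the teacher's top prediction
    is None.  Lenient rule: a valid-relation sample is kept if its distantly
    supervised label is among the teacher's top-K predictions.
    """
    ranked = np.argsort(-teacher_probs)  # relation indices, best first
    if ds_label == none_idx:
        return bool(ranked[0] == none_idx)
    return bool(ds_label in ranked[:K])
```

The asymmetry mirrors the class imbalance: None samples dominate the data, so they are discarded aggressively, while rarer valid-relation samples get the benefit of the doubt via top-K.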
{
"text": "We have used the following state-of-the-art neural relation extraction models as the student model in our filtering framework. These models use three types of embedding vectors: (1) word embedding vector w \u2208 R dw (2) a positional embedding vector u 1 \u2208 R du which represents the linear distance of a word from the start token of entity 1 (3) another positional embedding vector u 2 \u2208 R du which represents the linear distance of a word from the start token of entity 2. The sentences are represented using a sequence of vectors {x 1 , x 2 , ....., x n } where x t = w t u 1 t u 2 t . represents the concatenation of vectors and n is the sentence length. These token vectors x t are given as input to all the following models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "4.1 CNN (Zeng et al., 2014) In this model, convolution operations with maxpooling are applied on the token vectors sequence {x 1 , x 2 , ....., x n } to obtain the sentence-level feature vector.",
"cite_spans": [
{
"start": 8,
"end": 27,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "c i = f T (x i x i+1 .... x i+k\u22121 ) c max = max(c 1 , c 2 , ...., c n ) v = [c 1 max , c 2 max , ...., c f k max ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "f is a convolutional filter vector of dimension",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "k(d w +2d u )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "where k is the filter width. The index i moves from 1 to n and produces a set of scalar values {c 1 , c 2 , ....., c n }. The max-pooling operation chooses the maximum c max from these values as a feature. With f k number of filters, we get a feature vector v \u2208 R f k . This feature vector v is passed to feed-forward layer with softmax to classify the relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
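A minimal NumPy sketch of this convolution-plus-max-pooling feature extractor. The explicit window loop is for clarity, and the `(n, d)` / `(f_k, k*d)` shapes follow the description above rather than any released code.

```python
import numpy as np

def cnn_features(x, filters):
    """x: (n, d) token vectors; filters: (f_k, k*d), one row per filter f.

    Each filter is applied to every width-k window of concatenated token
    vectors (c_i = f^T [x_i; ...; x_{i+k-1}]), and global max-pooling keeps
    the largest response per filter, giving a feature vector v of length f_k.
    """
    n, d = x.shape
    k = filters.shape[1] // d
    # Concatenate each width-k window of token vectors into one flat row.
    windows = np.stack([x[i:i + k].reshape(-1) for i in range(n - k + 1)])
    c = windows @ filters.T      # (n-k+1, f_k): convolution outputs
    return c.max(axis=0)         # global max-pooling per filter
```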
{
"text": "4.2 PCNN (Zeng et al., 2015) Piecewise Convolutional Neural Network (PCNN) is a modified version of the CNN model described above. Similar to the CNN model, convolutional operations are applied to the input vector sequence. But CNN and PCNN models differ on how the max-pooling operation is performed on the convolutional outputs. Rather than applying a global max-pooling operation on the entire sentence, three max-pooling operations are applied on three segments/pieces of the sentence based on the location of the two entities. This is why this model is called the Piecewise Convolutional Neural Network (PCNN). The first max-pooling operation is applied from the beginning of the sequence to the end of the entity appearing first in the sentence.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "The second max-pooling operation is applied from the beginning of the entity appearing first in the sentence to the end of the entity appearing second in the sentence. The third max-pooling operation is applied from the beginning of the entity appearing second in the sentence to the end of the sentence. These max-pooled features are concatenated and passed to a feed-forward layer with softmax to determine the relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
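The three piecewise pooling windows described above can be sketched directly; the segment boundaries (inclusive entity end indices) follow the prose description, and the function name is ours.

```python
def pcnn_pool(c, e1_start, e1_end, e2_start, e2_end):
    """Piecewise max-pooling over per-filter convolution outputs c (length n).

    Segment 1: start of sentence to end of the first-appearing entity.
    Segment 2: start of the first entity to end of the second entity.
    Segment 3: start of the second entity to end of the sentence.
    Returns one max-pooled value per segment (for a single filter).
    """
    seg1 = c[:e1_end + 1]
    seg2 = c[e1_start:e2_end + 1]
    seg3 = c[e2_start:]
    return [max(seg1), max(seg2), max(seg3)]
```

Concatenating these three values across all filters yields the feature vector that is fed to the softmax classifier.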
{
"text": "4.3 Entity Attention (EA) (Shen and Huang, 2016) This model combines the CNN model with an attention network. First, convolutional operations with max-pooling are used to extract the global features of the sentence. Next, attention is applied to the words of the sentence based on the two entities separately. The word embedding of the last token of an entity is concatenated with the embedding of every word. This concatenated representation is passed to a feed-forward layer with tanh activation and then another feed-forward layer with softmax to get a scalar attention score for every word for that entity. The word embeddings are averaged based on the attention scores to get the attentive feature vectors. The CNN-extracted global feature vector and two attentive feature vectors for the two entities are concatenated and passed to a feed-forward layer with softmax to determine the relation.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(Shen and Huang, 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "4.4 Bi-GRU Word Attention (BGWA) (Jat et al., 2017) This model uses a bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014) to capture the longterm dependency among the words in the sentence. The tokens vectors x t are passed to a Bi-GRU layer. The hidden vectors of the Bi-GRU layer are passed to a bi-linear operator which is a combination of two feed-forward layers with softmax to compute a scalar attention score for each word. The hidden vectors of the Bi-GRU layer are multiplied by their corresponding attention scores for scaling up the hidden vectors. A piecewise convolution neural network (Zeng et al., 2015) is used on top of the scaled hidden vectors to obtain the feature vector. This feature vector is passed to a feed-forward layer with softmax to determine the relation.",
"cite_spans": [
{
"start": 33,
"end": 51,
"text": "(Jat et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 114,
"end": 132,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 610,
"end": 629,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Student Models",
"sec_num": "4"
},
{
"text": "To verify our hypothesis, we need training data that is created using distant supervision, thus noisy and test data which is not noisy, thus human-annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 5.1 Datasets",
"sec_num": "5"
},
{
"text": "If the test data is also noisy, then it will be hard to derive any conclusion from the results. So, we choose the New York Times (NYT) corpus of Hoffmann et al. (2011) for our experiments. This dataset has 24 valid relations and a None relation. The statistics of the dataset is given in Table 2 . The training dataset is created by aligning Freebase tuples to NYT articles, but the test dataset is manually annotated. We use 10% of the training data as validation data and the remaining 90% for training. ",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "Hoffmann et al. (2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments 5.1 Datasets",
"sec_num": "5"
},
{
"text": "We use precision, recall, and F1 scores to evaluate the performance of models on relation extraction after removing the None labels. We use a confidence threshold to decide if the relation of a test instance belongs to the set of valid relations R or None. If the network predicts None for a test instance, then it is considered as None only. But if the network predicts a relation from the set R and the corresponding softmax score is below the confidence threshold, then the final predicted label is changed to None. This confidence threshold is the one that achieves the highest F1 score on the validation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
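The thresholding rule can be written as a small helper; the names and the string-valued `None` label are illustrative assumptions.

```python
def final_label(pred_label, score, threshold, none_label="None"):
    """Map a prediction to its final label under the confidence threshold.

    A predicted None stays None; a predicted valid relation is kept only if
    its softmax score reaches the threshold, otherwise it becomes None.
    """
    if pred_label == none_label:
        return none_label
    return pred_label if score >= threshold else none_label
```

The threshold itself would be tuned by sweeping candidate values and keeping the one that maximizes F1 on the validation set.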
{
"text": "We run word2vec (Mikolov et al., 2013) on the NYT corpus to obtain the initial word embeddings with a dimension of d w = 50 and update the embeddings during training. We set the dimension positional embedding vector at d u = 5. We use f k = 230 convolutional filters of kernel size k = 3 for feature extraction whenever we apply the convolution operation. We use dropout in our network with a dropout rate of 0.5, and in convolutional layers, we use the tanh activation function. We train our models with a mini-batch size of 50 and optimize the network parameters using the Adagrad optimizer (Duchi et al., 2011) . We want to keep the value of \u03b1 max high because when the training progress, we want to increase the contribution of the self-ensemble model compare to the student model. So we set the value of \u03b1 max at 0.9. We experiment with E = {5, 10} epochs to ramp up the value of \u03b1 from 0 to \u03b1 max . We also experiment with K = {3, 5} for filtering the valid relation samples during the filtering process after each training iteration. The performance of the self-ensemble model does not vary much with these choices of E or K. So we use E = 5 and K = 3 for final experiments. Table 5 : Precision, Recall, and F1 score of the ensemble version of the student models on NYT dataset. \u2193 column shows the absolute % decline of F1 score respect to the SEF models (Table 3) .",
"cite_spans": [
{
"start": 16,
"end": 38,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 593,
"end": 613,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1182,
"end": 1189,
"text": "Table 5",
"ref_id": null
},
{
"start": 1362,
"end": 1371,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "5.3"
},
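The \u03b1 ramp-up and the exponential-moving-average (EMA) update of the self-ensemble weights can be sketched as follows. This is a sketch under assumptions: the paper ramps \u03b1 from 0 to \u03b1 max over E epochs but does not state the schedule here, so a linear ramp is assumed, and the per-parameter EMA follows the standard mean-teacher formulation.

```python
def ramp_alpha(step, ramp_steps, alpha_max=0.9):
    """Linearly ramp the EMA coefficient from 0 to alpha_max over ramp_steps.

    Early in training the teacher tracks the student closely (small alpha);
    later the teacher changes slowly (alpha near alpha_max).
    """
    return alpha_max * min(1.0, step / ramp_steps)


def ema_update(teacher_params, student_params, alpha):
    """Per-parameter update: teacher = alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]
```

With \u03b1 max = 0.9, the teacher retains 90% of its previous weights at each step once the ramp is complete, which is what makes it a smoothed ensemble of past student states.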
{
"text": "There are two approaches for relation extraction (Nayak et al., 2021): (i) pipeline approaches (Zeng et al., 2014, 2015; Jat et al., 2017; Nayak and Ng, 2019) and (ii) joint extraction approaches (Takanobu et al., 2019; Nayak and Ng, 2020). Most of these models work with distantly supervised noisy datasets, so noise mitigation is an important dimension in this area of research. Multi-instance relation extraction is one of the popular methods for noise mitigation. Riedel et al. (2010) and several subsequent works used this multi-instance learning concept in their proposed relation extraction models. For each entity pair, they used all the sentences that contain these two entities to find the relation between them. Their goal was to reduce the effect of noisy samples using this multi-instance setting. They used different types of sentence selection mechanisms to give importance to the sentences that contain relation-specific keywords and to ignore the noisy sentences. But this idea may not be effective if there is only one sentence for an entity pair. Ren et al. (2017) and Yaghoobzadeh et al. (2017) used a multi-task learning approach to mitigate the influence of the noisy samples, with fine-grained entity typing as an additional task in their models. Wu et al. (2017) used an adversarial training approach for the same purpose: they added noise to the word embeddings to make the model more robust to distantly supervised training. Qin et al. (2018a) used a generative adversarial network (GAN) to address the issue of noisy samples in relation extraction. They used a separate binary classifier as a generator for each positive relation class to identify the true positives for that relation and filter out the noisy ones. Qin et al. (2018b) used reinforcement learning to identify the noisy samples for the positive relation classes. He et al. (2020) also used reinforcement learning to identify the noisy samples for the positive relations and then used the identified noisy samples as unlabelled data in their model. Shang et al. (2020) used a clustering approach to identify the noisy samples; they assign the correct relation label to these noisy samples and use them as additional training data. Different from these approaches, we propose a student-teacher framework that can work with any supervised neural network model to address the issue of noisy samples in distantly supervised datasets.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "(Nayak et al., 2021)",
"ref_id": "BIBREF11"
},
{
"start": 96,
"end": 114,
"text": "(Zeng et al., 2014",
"ref_id": "BIBREF30"
},
{
"start": 115,
"end": 135,
"text": "(Zeng et al., , 2015",
"ref_id": "BIBREF29"
},
{
"start": 136,
"end": 153,
"text": "Jat et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 154,
"end": 172,
"text": "Nayak and Ng, 2019",
"ref_id": "BIBREF12"
},
{
"start": 208,
"end": 231,
"text": "(Takanobu et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 232,
"end": 251,
"text": "Nayak and Ng, 2020)",
"ref_id": "BIBREF13"
},
{
"start": 483,
"end": 503,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF18"
},
{
"start": 1053,
"end": 1070,
"text": "Ren et al. (2017)",
"ref_id": "BIBREF17"
},
{
"start": 1075,
"end": 1101,
"text": "Yaghoobzadeh et al. (2017)",
"ref_id": "BIBREF27"
},
{
"start": 1429,
"end": 1447,
"text": "Qin et al. (2018a)",
"ref_id": "BIBREF15"
},
{
"start": 1743,
"end": 1761,
"text": "Qin et al. (2018b)",
"ref_id": "BIBREF16"
},
{
"start": 2017,
"end": 2036,
"text": "Shang et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this work, we propose a self-ensemble-based framework for filtering noisy samples in distantly supervised relation extraction. Our framework identifies the noisy samples during training and removes them from the training data in the following iterations. It can be used with any supervised relation extraction model. We run experiments with several state-of-the-art neural models under this filtering framework on the distantly supervised New York Times dataset. The results show that our proposed framework improves the robustness of these models and increases their F1 scores on the relation extraction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The code and data for this work are available at https://github.com/nayakt/SENF4DSRE.git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by A*STAR under its RIE 2020 Advanced Manufacturing and Engineering (AME) programmatic grant RGAST2003, (award # A19E2b0098, project K-EMERGE: Knowledge Extraction, Modelling, and Explainable Reasoning for General Expertise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Table 3 : Precision, Recall, and F1 score comparison of the student models on the NYT dataset when trained with the self-ensemble filtering framework (SEF column) and when trained independently (Student column). We report the average of five runs with standard deviation. The \u2191 column shows the absolute % improvement in F1 score over the Student models.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "We include the results of our experiments in Table 3. We run the CNN, PCNN, EA, and BGWA models 5 times with different random seeds and report the average with standard deviation in the 'Student' column of Table 3. The 'SEF' (Self-Ensemble Filtering) column shows the average results of 5 runs of the CNN, PCNN, EA, and BGWA models with the self-ensemble filtering framework. Our SEF framework achieves 2.1%, 1.1%, 2.1%, and 2.3% higher F1 scores for the CNN, PCNN, EA, and BGWA models respectively compared to the Student models. Comparing the precision and recall scores of the four models, we see that our self-ensemble framework improves the recall score more than the corresponding precision score for each of these four models. These results show the effectiveness of our self-ensemble filtering framework on a distantly supervised dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "We also examine how the self-ensemble version of the student models behaves without filtering the noisy samples after each iteration. In this setting, we use the entire distantly supervised training data in every iteration. The results are included in Table 4 under the 'SE' (Self-Ensemble) column. The performance of the four neural models under self-ensemble training without filtering is not much different from the 'Student' performance in Table 3 . This shows that filtering the noisy samples from the training dataset helps to improve the performance of our proposed self-ensemble framework. Table 4 : Precision, Recall, and F1 score of the self-ensemble version of the student models on the NYT dataset without noise filtering. We report the average of five runs with standard deviation. The \u2193 column shows the absolute % decline in F1 score with respect to the SEF models (Table 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 4",
"ref_id": null
},
{
"start": 470,
"end": 477,
"text": "Table 3",
"ref_id": null
},
{
"start": 631,
"end": 638,
"text": "Table 4",
"ref_id": null
},
{
"start": 899,
"end": 908,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Self-Ensemble without Filtering",
"sec_num": "5.5"
},
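The filtering step itself can be sketched under one plausible reading of the hyperparameter K from the parameter settings: a training sample is kept only if its distant-supervision label appears among the teacher's top-K predicted relations. This is an illustrative assumption, not the paper's exact criterion; the function and variable names are hypothetical.

```python
def filter_noisy(samples, teacher_scores, k=3):
    """Keep a sample only if its distant label is in the teacher's top-k.

    samples: list of (sentence, distant_label) pairs
    teacher_scores: parallel list of dicts relation -> softmax score
    """
    kept = []
    for (sentence, label), scores in zip(samples, teacher_scores):
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        if label in top_k:
            kept.append((sentence, label))
    return kept
```

Samples rejected by this check would be excluded from the next training iteration, which is the behavior the SE-versus-SEF comparison above isolates.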
{
"text": "Since our SEF framework has an ensemble component, we compare its performance with the ensemble versions of the independent student models. The 'Ensemble' column in Table 5 refers to the ensemble results of the 5 runs of each student model: we average the softmax outputs of the five runs on the test data to decide the relation. Our SEF framework outperforms the ensemble results for CNN, PCNN, EA, and BGWA by 1.6%, 0.5%, 0.7%, and 2.6% F1 score respectively. Here, we should consider that building an ensemble model requires running the student models multiple times (5 times in our case). In contrast, a self-ensemble model can be built in a single run, with the small additional cost of maintaining a moving average of the student model's weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble vs Self-Ensemble Filtering",
"sec_num": "5.6"
}
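The multi-run ensembling baseline described above is simple softmax averaging. A minimal sketch, with the dict-based representation an assumption:

```python
def ensemble_predict(softmax_runs):
    """Average the softmax distributions from several runs and take argmax.

    softmax_runs: list of dicts (one per run) mapping relation -> probability.
    """
    relations = softmax_runs[0].keys()
    averaged = {r: sum(run[r] for run in softmax_runs) / len(softmax_runs)
                for r in relations}
    return max(averaged, key=averaged.get)
```

Unlike this baseline, which needs every run's model at test time, the self-ensemble teacher is a single set of averaged weights produced during one training run.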
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J Cafarella, Stephen Soder- land, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJ- CAI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Freebase: A collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM SIGMOD ICMD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A col- laboratively created graph database for structuring human knowledge. In ACM SIGMOD ICMD.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Trans- lation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adaptive semi-supervised learning for cross-domain sentiment classification",
"authors": [
{
"first": "Ruidan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Wee Sun Lee",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Adaptive semi-supervised learn- ing for cross-domain sentiment classification. In EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving neural relation extraction with positive and unlabeled learning",
"authors": [
{
"first": "Zhengqiu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guanchun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengqiu He, Wenliang Chen, Yuyi Wang, Wei Zhang, Guanchun Wang, and Min Zhang. 2020. Improv- ing neural relation extraction with positive and un- labeled learning. In AAAI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving distantly supervised relation extraction using word and entity based attention",
"authors": [
{
"first": "Sharmistha",
"middle": [],
"last": "Jat",
"suffix": ""
},
{
"first": "Siddhesh",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th Workshop on Automated Knowledge Base Construction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharmistha Jat, Siddhesh Khandelwal, and Partha Talukdar. 2017. Improving distantly supervised rela- tion extraction using word and entity based attention. In Proceedings of the 6th Workshop on Automated Knowledge Base Construction.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL and IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In ACL and IJCNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep neural approaches to relation triplets extraction: A comprehensive survey",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapas Nayak, Navonil Majumder, Pawan Goyal, and Soujanya Poria. 2021. Deep neural approaches to relation triplets extraction: A comprehensive survey. Cognitive Computing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective attention modeling for neural relation extraction",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapas Nayak and Hwee Tou Ng. 2019. Effective at- tention modeling for neural relation extraction. In CoNLL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Effective modeling of encoder-decoder architecture for joint entity and relation extraction",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapas Nayak and Hwee Tou Ng. 2020. Effective mod- eling of encoder-decoder architecture for joint entity and relation extraction. In AAAI.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SELF: Learning to filter noisy labels with self-ensembling",
"authors": [],
"year": null,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duc Tam Nguyen, Chaithanya Kumar Mummadi, Thi Phuong Nhung Ngo, Thi Hoai Phuong Nguyen, Laura Beggel, and Thomas Brox. 2020. SELF: Learning to filter noisy labels with self-ensembling. In ICLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "DSGAN: Generative adversarial training for distant supervision relation extraction",
"authors": [
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018a. DSGAN: Generative adversarial training for distant supervision relation extraction. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Robust distant supervision relation extraction via deep reinforcement learning",
"authors": [
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018b. Robust distant supervision relation extrac- tion via deep reinforcement learning. In ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "CoType: Joint extraction of typed entities and relations with knowledge bases",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zeqiu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Tarek",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Abdelzaher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2017,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, and Jiawei Han. 2017. CoType: Joint extraction of typed entities and relations with knowledge bases. In WWW.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "ECML and KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In ECML and KDD.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Are noisy sentences useless for distant supervised relation extraction",
"authors": [
{
"first": "Yuming",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "He-Yan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xian-Ling",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuming Shang, He-Yan Huang, Xian-Ling Mao, Xin Sun, and Wei Wei. 2020. Are noisy sentences use- less for distant supervised relation extraction? In AAAI.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attentionbased convolutional neural network for semantic relation extraction",
"authors": [
{
"first": "Yatian",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yatian Shen and Xuanjing Huang. 2016. Attention- based convolutional neural network for semantic re- lation extraction. In COLING.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP and CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In EMNLP and CoNLL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A hierarchical framework for relation extraction with reinforcement learning",
"authors": [
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": ""
},
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiexi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2019. A hierarchical framework for relation extraction with reinforcement learning. In AAAI.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Tarvainen",
"suffix": ""
},
{
"first": "Harri",
"middle": [],
"last": "Valpola",
"suffix": ""
}
],
"year": 2017,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antti Tarvainen and Harri Valpola. 2017. Mean teach- ers are better role models: Weight-averaged consis- tency targets improve semi-supervised deep learning results. In NeurIPS.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "RESIDE: Improving distantly-supervised neural relation extraction using side information",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Chiranjib",
"middle": [],
"last": "Sai Suman Prayaga",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. RESIDE: Improving distantly-supervised neural re- lation extraction using side information. In EMNLP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving distantly supervised relation extraction with neural noise converter and conditional optimal selector",
"authors": [
{
"first": "Shanchan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Qiong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanchan Wu, Kai Fan, and Qiong Zhang. 2019. Im- proving distantly supervised relation extraction with neural noise converter and conditional optimal selec- tor. In AAAI.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Adversarial training for relation extraction",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Wu, David Bamman, and Stuart Russell. 2017. Ad- versarial training for relation extraction. In EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Noise mitigation for neural entity typing and relation extraction",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch\u00fctze. 2017. Noise mitigation for neural entity typing and relation extraction. In EACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distant supervision relation extraction with intra-bag and inter-bag attentions",
"authors": [
{
"first": "Zhen-Hua",
"middle": [],
"last": "Zhi-Xiu Ye",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Distant supervi- sion relation extraction with intra-bag and inter-bag attentions. In NAACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In COLING.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Overview of the self-ensemble noisy samples filtering framework. It starts with the clean and noisy samples generated by distant supervision. During training, a self-ensemble version of the model is maintained. At the end of an iteration, this self-ensemble model is used to identify the noisy samples in the training data. These noisy samples are filtered out from the next iteration of training.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": ", Hoffmann et al. (2011), Surdeanu et al. (2012), Lin et al. (2016), Yaghoobzadeh et al. (2017), Vashishth et al. (2018), Wu et al. (2019), and Ye and Ling",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"html": null,
"text": "Examples of distantly supervised (DS) clean and noisy samples.",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF3": {
"html": null,
"text": "The statistics of the NYT dataset.",
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}