{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:52.921038Z" }, "title": "Relation Classification via Relation Validation", "authors": [ { "first": "Jose", "middle": [ "G" ], "last": "Moreno", "suffix": "", "affiliation": { "laboratory": "UMR 5505 CNRS F-31000", "institution": "University of Toulouse IRIT", "location": { "settlement": "Toulouse", "country": "France" } }, "email": "jose.moreno@irit.fr" }, { "first": "Antoine", "middle": [], "last": "Doucet", "suffix": "", "affiliation": { "laboratory": "", "institution": "University", "location": { "addrLine": "of La Rochelle L3i F-17000, La Rochelle", "country": "France" } }, "email": "antoine.doucet@univ-lr.fr" }, { "first": "Brigitte", "middle": [], "last": "Grau", "suffix": "", "affiliation": { "laboratory": "LIMSI, UPR 3251 CNRS F", "institution": "", "location": { "postCode": "91405", "settlement": "Orsay", "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recognising if a relation holds between two entities in a text plays a vital role in information extraction. To address this problem, multiple models have been proposed based on fixed or contextualised word representations. In this paper, we propose a meta relation classification model that can integrate the most recent models by the use of a related task, namely relation validation. To do so, we encode the text that may contain the relation and a relation triplet candidate into a sentence-triplet representation. We grounded our strategy in recent neural architectures that allow single sentence classification as well as pair comparisons. Finally, our model is trained to determine the most relevant sentence-triplet pair from a set of candidates. 
Experiments on two public data sets for relation extraction show that the use of the sentence-triplet representation outperforms strong baselines and achieves comparable results when compared to larger models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Recognising if a relation holds between two entities in a text plays a vital role in information extraction. To address this problem, multiple models have been proposed based on fixed or contextualised word representations. In this paper, we propose a meta relation classification model that can integrate the most recent models by the use of a related task, namely relation validation. To do so, we encode the text that may contain the relation and a relation triplet candidate into a sentence-triplet representation. We grounded our strategy in recent neural architectures that allow single sentence classification as well as pair comparisons. Finally, our model is trained to determine the most relevant sentence-triplet pair from a set of candidates. Experiments on two public data sets for relation extraction show that the use of the sentence-triplet representation outperforms strong baselines and achieves comparable results when compared to larger models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recognising and classifying relations between two entities in a text plays a vital role in knowledge base population (KBP), a major sub-task of information extraction (IE). Some examples of typical relations in knowledge bases (KB) are spouse, CEO, place of birth, profession, etc. Nowadays, there exist large KB that store millions of facts such as DBpedia (Bizer et al., 2009) or YAGO (Hoffart et al., 2013) . 
However, more than 70% of people entities do not have associated information for relations such as place of birth or nationality (Dong et al., 2014) .", "cite_spans": [ { "start": 358, "end": 378, "text": "(Bizer et al., 2009)", "ref_id": "BIBREF4" }, { "start": 387, "end": 409, "text": "(Hoffart et al., 2013)", "ref_id": "BIBREF15" }, { "start": 538, "end": 557, "text": "(Dong et al., 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most approaches model the relation classification (RC) (dos Santos et al., 2015; Nguyen and Grishman, 2015) task as a learning problem where the goal is to predict if a passage contains a type of relation (multi-class classification) . This setup requires annotated examples of each class, i.e. each type of relation, which can be difficult to obtain. To overcome this problem, distant supervision has been proposed (Mintz et al., 2009) for automatically annotating texts given relation triplets existing in a KB by projecting triplets into texts to increase the input data. Its main drawback is that distant supervision models must deal with wrongly annotated examples. The difficulty of the task is shown by the results of the TAC KBP slot filling task 1 . For instance, in 2014, the maximum F1-score of the task was 0.3672 (Surdeanu and Ji, 2014) . Another trend is to collect information directly from the web in an unsupervised setting, i.e. the open IE paradigm (Banko et al., 2007) . In these last two settings, one crucial point is to be able to assess the validity of the extracted relations. This point motivated an extra track in TAC KBP 2015 following a divide-and-conquer setup. 
It consists in validating the relations extracted by relation extraction (RE) systems in order to improve their final scores.", "cite_spans": [ { "start": 55, "end": 80, "text": "(dos Santos et al., 2015;", "ref_id": "BIBREF10" }, { "start": 81, "end": 107, "text": "Nguyen and Grishman, 2015)", "ref_id": "BIBREF20" }, { "start": 208, "end": 236, "text": "(multi-class classification)", "ref_id": null }, { "start": 419, "end": 439, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF18" }, { "start": 832, "end": 855, "text": "(Surdeanu and Ji, 2014)", "ref_id": "BIBREF27" }, { "start": 981, "end": 1001, "text": "(Banko et al., 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Relation validation (RV) aims at taking advantage of several hypotheses, provided by one or several systems, to improve the recognition of relations in texts and to discard false ones. Given a candidate relation triplet (e1, R, e2) and a passage, this task can be defined as learning to decide if the passage supports the relation in a binary classification setup. Trigger words and relation patterns are usually modelled in relation validation as features for representing the relation type. In Wang and Neumann (2008) , the relation validation setup is modified and presented as an entailment problem, where systems learn whether the text entails the relation based on linguistic features.", "cite_spans": [ { "start": 514, "end": 537, "text": "Wang and Neumann (2008)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose not only to learn the representation of the relation type, but also to learn the representation of the validation knowledge by using a neural architecture for modelling relation validation, inspired by neural entailment models. 
We aim to decide whether the text supports the relation by encoding the text and the triplet 2 in a transformer architecture as in (Baldini Soares et al., 2019; Zhao et al., 2019) . Once a model for relation validation is learned, we use it to validate the output of a relation classification model. Our experiments show that our proposal outperforms robust neural models for relation classification but does not surpass the most recent models.", "cite_spans": [ { "start": 385, "end": 414, "text": "(Baldini Soares et al., 2019;", "ref_id": "BIBREF2" }, { "start": 415, "end": 433, "text": "Zhao et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows: Section 2 presents relevant models for relation classification and validation. Section 3 details our strategy to classify relations based on relation validation. Then, the experimental setup and results are presented in Section 4. Finally, conclusions are drawn in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different ensemble models (Viswanathan et al., 2015) have been defined for the relation validation KBP task based on the predictions made by the RE systems. However, Yu et al. (2014) show that relation validation requires considering linguistic features for recognising if a relation is expressed in a text by exploiting rich linguistic knowledge from multiple lexical, syntactic, and semantic levels. In Wang and Neumann (2008) , the relation to validate is transformed by simple patterns into a sentence and an alignment between the two texts is performed by a kernel-based approach.", "cite_spans": [ { "start": 26, "end": 52, "text": "(Viswanathan et al., 2015)", "ref_id": "BIBREF28" }, { "start": 165, "end": 181, "text": "Yu et al. 
(2014)", "ref_id": "BIBREF37" }, { "start": 404, "end": 427, "text": "Wang and Neumann (2008)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Traditional methods for relation extraction are based on feature engineering and rely on lexical and syntactic information. Dependency trees provide clues for deciding the presence of a relation in unsupervised relation extraction (Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Fundel et al., 2007) . Gamallo et al. (2012) defined relation patterns by parsing the dependencies in open information extraction. Words around the entity mentions in sentences give clues to characterise the semantics of a relation (Niu et al., 2012; Hoffmann et al., 2011; Yao et al., 2011; Riedel et al., 2010; Mintz et al., 2009) . In addition to linguistic information, collective information about the entities and their relations was exploited for RV (Rahman et al., 2018) by adding features based on a graph of entities and for RE by Augenstein (2016), who integrated global information about the object of a relation. The latter model shows the importance of adding information about the entities in the triplet. The above approaches rely on Natural Language Processing (NLP) tools for syntactic analysis and on lexical knowledge for identifying triggers. Thus, it remains difficult to overcome the lexical gap between texts and relation names when learning relation patterns for different types of relations in an open domain.", "cite_spans": [ { "start": 231, "end": 259, "text": "(Culotta and Sorensen, 2004;", "ref_id": "BIBREF6" }, { "start": 260, "end": 285, "text": "Bunescu and Mooney, 2005;", "ref_id": "BIBREF5" }, { "start": 286, "end": 306, "text": "Fundel et al., 2007)", "ref_id": "BIBREF11" }, { "start": 309, "end": 330, "text": "Gamallo et al. 
(2012)", "ref_id": "BIBREF12" }, { "start": 521, "end": 539, "text": "(Niu et al., 2012;", "ref_id": "BIBREF21" }, { "start": 540, "end": 562, "text": "Hoffmann et al., 2011;", "ref_id": "BIBREF16" }, { "start": 563, "end": 580, "text": "Yao et al., 2011;", "ref_id": "BIBREF35" }, { "start": 581, "end": 601, "text": "Riedel et al., 2010;", "ref_id": "BIBREF24" }, { "start": 602, "end": 621, "text": "Mintz et al., 2009)", "ref_id": "BIBREF18" }, { "start": 747, "end": 768, "text": "(Rahman et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, end-to-end neural network (NN) based approaches have emerged and attracted considerable attention for the relation classification task (dos Santos et al., 2015; Nguyen and Grishman, 2015; Vu et al., 2016; Dligach et al., 2017; Zheng et al., 2016; Zhang et al., 2018) . However, they do not leverage any triplet representation of a relation for better understanding the relatedness between the text and the triplet. Many NN models for evaluating the similarity of two sentences have been proposed. 
They encode each entry by a CNN or an RNN (e.g., LSTM or BiLSTM), and compute a similarity between the sentence representations (Severyn and Moschitti, 2015) or compute interactions between the texts by an attention layer (Yin et al., 2016) .", "cite_spans": [ { "start": 143, "end": 168, "text": "(dos Santos et al., 2015;", "ref_id": "BIBREF10" }, { "start": 169, "end": 195, "text": "Nguyen and Grishman, 2015;", "ref_id": "BIBREF20" }, { "start": 196, "end": 212, "text": "Vu et al., 2016;", "ref_id": "BIBREF29" }, { "start": 213, "end": 234, "text": "Dligach et al., 2017;", "ref_id": "BIBREF8" }, { "start": 235, "end": 254, "text": "Zheng et al., 2016;", "ref_id": "BIBREF40" }, { "start": 255, "end": 274, "text": "Zhang et al., 2018)", "ref_id": "BIBREF38" }, { "start": 637, "end": 666, "text": "(Severyn and Moschitti, 2015)", "ref_id": "BIBREF25" }, { "start": 731, "end": 749, "text": "(Yin et al., 2016)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Most recent models encode one or two sentences by using pre-trained neural models. Their use in RC has been successfully tested by Baldini Soares et al. (2019) where entities are marked and the sentence representation is used. Then a simple but effective sequence classification is performed using the sentence representation token which encodes the full sentence including the marked tokens. Their performance is boosted by using more documents in an unsupervised fashion. Despite more information being used, Baldini Soares et al. (2019) do not use an explicit relation representation. 
In an effort to cope with this problem, we explore the use of pre-trained neural models for the RV problem by explicitly using a triplet-sentence representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our proposal first learns how to validate relations grounded on a sentence-triplet representation in order to predict whether a relation stands in a sentence or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation classification via relation validation", "sec_num": "3" }, { "text": "To do so, our model is based on a pre-trained BERT model for sequence classification (Devlin et al., 2018) . Using pre-trained models to address RC is a promising strategy as shown by Baldini Soares et al. (2019). In both cases, i.e. RV or RC, a major consideration is the input definition to correctly identify the target entities, mainly because pre-trained models do not include this option by default. In this section, we present the details of the architecture together with the input transformations to correctly feed a sequence classification model such as BERT.", "cite_spans": [ { "start": 85, "end": 106, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Relation classification via relation validation", "sec_num": "3" }, { "text": "We opted for a simplified version 3 of the architecture proposed in Baldini Soares et al. (2019) for relation classification, namely BERT EM . It is based on fine-tuning a pre-trained transformer called BERT (Devlin et al., 2018) where an extra layer is added to classify the sentence representation, i.e. a classification task is performed using the [CLS] token as input. As reported by Baldini Soares et al. 
(2019), an important component is the use of mark symbols to identify the entities to classify.", "cite_spans": [ { "start": 208, "end": 229, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "BERT-based Architecture", "sec_num": "3.1" }, { "text": "Given a tokenised sentence S = \"t 1 t 2 ... t n \", an origin offset o o \u2208 1, n, a target offset o t \u2208 1, n, and a set of k relations R = {r 1 , r 2 , ..., r k }. The relation extraction problem consists in determining which relation r p \u2208 R stands in the sentence between the tokens in positions o o and o t , respectively. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3.2.1" }, { "text": "We follow the input considerations for RC proposed by (Baldini Soares et al., 2019) . Thus, to introduce those markers, the original input of RC models", "cite_spans": [ { "start": 54, "end": 83, "text": "(Baldini Soares et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "input(S) = [CLS] t 1 t 2 ... t n [SEP]", "eq_num": "(1)" } ], "section": "Input considerations", "sec_num": "3.2.2" }, { "text": "is modified to include the entity markers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "input (S) = [CLS] ...$ t oo $ ...# t ot #... 
[SEP]", "eq_num": "(2)" } ], "section": "Input considerations", "sec_num": "3.2.2" }, { "text": "Note that length(input (S)) = length(input(S)) + 4, because we added the tokens $ and # twice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.2.2" }, { "text": "Given a tokenised sentence S = \"t 1 t 2 ... t n \", an origin offset o o \u2208 1, n, a target offset o t \u2208 1, n, and a triplet t =< t oo , r, t ot >. The relation validation problem consists in determining whether the relation r between t oo and t ot is supported by the sentence S or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3.3.1" }, { "text": "We transform triplets t =< t oo , r, t ot > into a sequence of its label words. Then we use the sentence S on one side and the triplet t on the other side as input of the model to match the relation validation problem into a text entailment setup as suggested by Wang and Neumann (2008) . So, in this case, the input is modified to", "cite_spans": [ { "start": 263, "end": 286, "text": "Wang and Neumann (2008)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.3.2" }, { "text": "input (S) = [CLS]...$ t oo $ ... # t ot #...[SEP] t oo t ot r w 1 r w 2 ... r wm [SEP]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.3.2" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.3.2" }, { "text": "Note that length(input (S)) = length(input(S)) + 4 + (m + 2), because of the tokens $ and #, and the triplet t is represented by m + 2 tokens (m words for the relation r and the two entities tokens). This architecture is possible because of the single or double input capabilities of transformer architectures such as BERT. Our proposed architecture is depicted in Figure 1 . 
As for RC, we add the mark symbols in the sentence but not for the triplet. The final prediction is based on the sentence representation or the [CLS] token.", "cite_spans": [], "ref_spans": [ { "start": 365, "end": 373, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Input considerations", "sec_num": "3.3.2" }, { "text": "As our work focuses on relation extraction, a prior stage is needed to transform any relation classification data set into a relation validation one (i.e. as many examples as relations/classes). This transformation consists in generating |R| relation validation examples for each relation extraction one, by considering the correct relation as positive and others as negatives. In this case, if S is the set of examples for RC, then the set of examples for RV (S RV ) is |R| times larger than S. However, to prevent imbalance, negative sampling is commonly used. In this case, |S RV | = (ns + 1) \u00d7 |S| where ns is the number of negative examples used to build S RV .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input considerations", "sec_num": "3.3.2" }, { "text": "Our main contribution is the definition of a new model for RC using RV, namely BERT+RC+RV. Figure 1 : Our relation validation model. Tokens in bold are marked using \"$\" for the Entity1 and \"#\" for the Entity2.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Validation of a classification prediction", "sec_num": "3.4" }, { "text": "During training time our RV model behaves as described in Algorithm 1. The set S RV used as input is built as described in Section 3.3.2. createInput generates an input such as in Equation 2. The output is a relation validation model (M RV ) capable of detecting if the input is valid or not. On the other hand, at inference time not all cases are evaluated. 
Our model can use as input the outputs of multiple RC models 5 (S v ) as described in Algorithm 2. Each example in S v is composed of a sentence and n RC labels predicted by n RC RC models, i.e. each example has a list (L) of n RC predictions. Thus, our RV model defines the most suitable label based on the sentence and the triplet together, instead of a classic RC model that only uses the sentence. getTriplet is a function based on a simple dictionary that returns the relation words (r w 1 , ..., r wm ) related to a label l rc and the entities (t oo and t ot ) in S. This way, our model is not only capable of learning from the same data but also capable of aggregating multiple RC predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validation of a classification prediction", "sec_num": "3.4" }, { "text": "Input:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: BERT+RC+RV prediction", "sec_num": null }, { "text": "Set of examples to validate S v {Sentence (S), labels (L)}, a Validation model (M RV ) l V = [ ] for S, L \u2208 S v do l i\u2212valid = [ ] for l i \u2208 unique(L) do t = getTriplet(l i ,S) input (S) = createInput(S,t) confid = predict(M RV , input (S)) l i\u2212valid . append(l i , confid) l V .append(labelMaxConfidence(l i\u2212valid )) Output: List of predictions (l V )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: BERT+RC+RV prediction", "sec_num": null }, { "text": "4 Experiments and Results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: BERT+RC+RV prediction", "sec_num": null }, { "text": "In this study, we experimented on two publicly available data sets: SemEval10 6 and TACRED 7 . Statistics of these standard relation classification data sets are presented in Table 1 . We created a relation validation version from both data sets as described in Section 3.3.2. 
The input of our RV model needs a set of relation words which, originally, are not present in the data sets. Thus, to obtain these words, we used a rather simple strategy that consists of tokenising the relation names and using them as relation words. If needed, it considers the relation direction by reversing the position of the tokenised words. Table 2 shows some examples of the selected words.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 1", "ref_id": null }, { "start": 625, "end": 632, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "In both cases, we used the respective official F 1 metric 8 for evaluation. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "We implemented BERT EM (EntityMarkers[CLS] version) of Baldini Soares et al. (2019) for RC and adapted it to perform RV 9 . For SemEval10, we used 10% of the training data as validation data, which allows a fair comparison against previous works. The maximum number of epochs was fixed to 5 and the best epoch in validation was used for prediction 10 . Negative sampling was fixed to 10, where the input sentence and the entities remain the same but the words used for the relation representation (r w 1 , r w 2 , ..., r wm ) are sampled from other classes. Binary Cross Entropy was used as the loss function, Adam as the optimiser, bert-base-uncased 11 as the pretrained model, and other parameters were assigned following the library recommendations (Wolf et al., 2019) . 
12 The final layer is composed of as many neurons as classes in each data set for RC and equal to two for RV (negative or positive).", "cite_spans": [ { "start": 731, "end": 750, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF33" }, { "start": 753, "end": 755, "text": "12", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": "4.2" }, { "text": "Cause-Effect(e1,e2) Cause, Effect", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data set Relation Words", "sec_num": null }, { "text": "Cause-Effect(e2,e1) Effect, Cause ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SemEval10", "sec_num": null }, { "text": "Average and best result of 5 runs of our implementation of (Baldini Soares et al., 2019) using the SemEval10 data set are presented in Table 3 (BERT EM *). The reported results are within the values reported in the original paper for this configuration, but we used bert-base-uncased instead 9 Our code is publicly available at https://github.com/jgmorenof/rcviarv2020.", "cite_spans": [ { "start": 292, "end": 293, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "10 Our models got the best validation performances at epoch 5, no further epochs were explored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "11 https://github.com/google-research/bert 12 We did not perform parameters search. Table 5 : Performances for one run of our method vs BERT EM runs in terms of F 1 using the SemEval10 data set. 
We calculated our results by epoch after training.", "cite_spans": [ { "start": 43, "end": 45, "text": "12", "ref_id": null } ], "ref_spans": [ { "start": 84, "end": 91, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "BERT EM * -run1 - - - - 0.8760 BERT EM * -run2 - - - - 0.8683 BERT EM * -run3 - - - - 0.8688 BERT EM * -run4 - - - - 0.8770 BERT EM * -run5 - - - - 0.8614", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "of bert-large-uncased due to computational constraints. In both cases, for average and best, our results using the relation validation model outperform their counterparts by a non-negligible margin. In order to understand the cases in which BERT+RC+RV makes the right prediction, we have reported the percentage of correct and incorrect predictions grouped by the number of candidates in Table 4 . Note that at this stage BERT+RC+RV does not consider the number of predictions made for a candidate (as is done by voting) but analyses each candidate independently of its popularity. Although we used 5 runs, none of the examples obtained five candidates, as for every test example at least two models predicted the same class. The number of correct predictions made by our validation model is 68.69% when there are only 2 candidates but decreases as the number of candidates increases (down to 33.33% for 4 candidates). However, in most of the cases, the predictions of the relation classification model only get 2 candidates (83.81%). Clearly, this result shows that there is still room for improvement by proposing better RV models. Following this direction, we apply majority voting 13 over the predictions of BERT EM and BERT+RC+RV. Results are included in Table 3 . Note that voting benefits our baseline but also our method by a similar margin. The lower part of Table 3 allows comparing our results to those of the most recent RC models. 
The best result, an F 1 score of 0.8941, is obtained with majority voting over the predictions of the RV model. When compared against results reported in SemEval10, our method ranks third, slightly behind BERT EM +M T B, but quite far from EPGNN (Zhao et al., 2019) . However, BERT+RC+RV remains an easy-to-implement model as no special modification is needed, when compared with BERT EM +M T B, which uses extra auto-supervised training plus a larger model 14 , and EPGNN, which needs graph embeddings. Moreover, we believe that BERT EM +M T B can be improved if more robust models are validated.", "cite_spans": [ { "start": 1711, "end": 1730, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 388, "end": 395, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1257, "end": 1264, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1365, "end": 1372, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "We also studied the performance of our method by epoch, as reported in Table 5 . Results of BERT EM * are presented for epoch 5 as this epoch achieved the best validation result. Note that our method 13 The class that receives the highest number of votes will be chosen.", "cite_spans": [ { "start": 195, "end": 197, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "14 bert-large-uncased uses three times more parameters (340 million) than bert-base-uncased (110 million). outperforms all individual RC predictions from the first epoch and no underperformance is observed across epochs. This result suggests that our method is an effective way to combine RC predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Finally, we experimented with our model using the TACRED data set. 
Results are reported in Table 3 . The results follow the same pattern as with the SemEval10 data set, except for one important difference: The performance obtained with BERT EM * (F 1 = 65.50) is much lower than the value reported by the authors (F 1 = 69.13). This can be explained by the fact that the number of relations in TACRED is twice as high as in SemEval10. Consequently, more parameters allowed a richer representation and a better starting point (+4.5 absolute points w.r.t. F 1 ).", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In this paper, we presented a new strategy to improve neural models for relation classification by using relation validation knowledge, i.e. the sentence-triplet representation. Experiments on two public data sets support our hypothesis. The proposed strategy enables new ways to improve existing methods as it can be easily plugged into more recent (or future) and more powerful models. Future work will focus on the use of this strategy across tasks from different (and distant) domains, as our relation validation architecture can validate triplets with unseen relations. This opens an interesting research direction for relation classification by focusing more on triplet-sentence representations rather than exclusively on the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://catalog.ldc.upenn.edu/LDC2018T22", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We are aware that our model mainly bases its improvements on input modification. However, we strongly believe that this is unfairly underestimated in the field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the EntityMarkers[CLS] version. 
Other configurations were not explored and are left for future work. 4 Note that a non-relation or another relation may be part of the set R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our experiments, we used the outputs of our implementation of a state-of-the-art RC model, BERT EM , described in Section 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been partly supported by the European Union's Horizon 2020 research and innovation programme under grant 825153 (EMBED-DIA). We also thank the anonymous reviewers for their careful reading of this paper and their many insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Improving Relation Extraction by Pretrained Language Representations", "authors": [ { "first": "Christoph", "middle": [], "last": "Alt", "suffix": "" }, { "first": "Marc", "middle": [], "last": "H\u00fcbner", "suffix": "" }, { "first": "Leonhard", "middle": [], "last": "Hennig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of AKBC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Alt, Marc H\u00fcbner, and Leonhard Hennig. 2019. Improving Relation Extraction by Pretrained Language Representations. In Proceedings of AKBC.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Web Relation Extraction with Distant Supervision", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabelle Augenstein. 2016. Web Relation Extraction with Distant Supervision. Ph.D. Dissertation. 
Uni- versity of Sheffield.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Matching the Blanks: Distributional Similarity for Relation Learning", "authors": [ { "first": "", "middle": [], "last": "Livio Baldini", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Soares", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Fitzgerald", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Ling", "suffix": "" }, { "first": "", "middle": [], "last": "Kwiatkowski", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "2895--2905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the Blanks: Distributional Similarity for Relation Learn- ing. In Proceedings of the 57th Annual Meeting of the ACL. 2895-2905.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Open Information Extraction from the Web", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Broadhead", "suffix": "" }, { "first": "", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2007, "venue": "IJ-CAI", "volume": "7", "issue": "", "pages": "2670--2676", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko, Michael J Cafarella, Stephen Soder- land, Matthew Broadhead, and Oren Etzioni. 2007. Open Information Extraction from the Web.. In IJ- CAI, Vol. 7. 
2670-2676.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "DBpedia-A crystallization point for the Web of Data", "authors": [ { "first": "Christian", "middle": [], "last": "Bizer", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Kobilarov", "suffix": "" }, { "first": "S\u00f6ren", "middle": [], "last": "Auer", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Becker", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Cyganiak", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Hellmann", "suffix": "" } ], "year": 2009, "venue": "Journal of web semantics", "volume": "7", "issue": "", "pages": "154--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Bizer, Jens Lehmann, Georgi Kobilarov, S\u00f6ren Auer, Christian Becker, Richard Cyganiak, and Sebastian Hellmann. 2009. DBpedia-A crystal- lization point for the Web of Data. Journal of web semantics 7, 3 (2009), 154-165.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A shortest path dependency kernel for relation extraction", "authors": [ { "first": "C", "middle": [], "last": "Razvan", "suffix": "" }, { "first": "Raymond J", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on HLT and EMNLP", "volume": "", "issue": "", "pages": "724--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extrac- tion. In Proceedings of the conference on HLT and EMNLP. 
Association for Computational Linguistics, 724-731.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Dependency Tree Kernels for Relation Extraction", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting on ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta and Jeffrey Sorensen. 2004. Dependency Tree Kernels for Relation Extraction. In Proceedings of the 42nd Annual Meeting on ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:1810.04805 (2018).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Neural Temporal Relation Extraction", "authors": [ { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Guergana", "middle": [], "last": "Savova", "suffix": "" } ], "year": 2017, "venue": "", "volume": "2017", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitriy Dligach, Timothy Miller, Chen Lin, Steven Bethard, and Guergana Savova. 2017. Neural Tem- poral Relation Extraction. EACL 2017 (2017), 746.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Knowledge vault: A web-scale approach to probabilistic knowledge fusion", "authors": [ { "first": "Xin", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Geremy", "middle": [], "last": "Heitz", "suffix": "" }, { "first": "Wilko", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Ni", "middle": [], "last": "Lao", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Strohmann", "suffix": "" }, { "first": "Shaohua", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "601--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowl- edge vault: A web-scale approach to probabilis- tic knowledge fusion. 
In Proceedings of the 20th ACM SIGKDD international conference on Knowl- edge discovery and data mining. 601-610.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Classifying Relations by Ranking with Convolutional Neural Networks", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of ACL and the 7th International JCNLP", "volume": "", "issue": "", "pages": "626--634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying Relations by Ranking with Con- volutional Neural Networks. In Proceedings of the 53rd Annual Meeting of ACL and the 7th Interna- tional JCNLP. 626-634.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "RelEx-Relation extraction using dependency parse trees", "authors": [ { "first": "Katrin", "middle": [], "last": "Fundel", "suffix": "" }, { "first": "Robert", "middle": [], "last": "K\u00fcffner", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Zimmer", "suffix": "" } ], "year": 2007, "venue": "Bioinformatics", "volume": "23", "issue": "", "pages": "365--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Fundel, Robert K\u00fcffner, and Ralf Zimmer. 2007. RelEx-Relation extraction using dependency parse trees. 
Bioinformatics 23, 3 (2007), 365-371.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dependency-based open information extraction", "authors": [ { "first": "Pablo", "middle": [], "last": "Gamallo", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Garcia", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Fern\u00e1ndez-Lanza", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the joint workshop on unsupervised and semi-supervised learning in NLP", "volume": "", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Gamallo, Marcos Garcia, and Santiago Fern\u00e1ndez-Lanza. 2012. Dependency-based open information extraction. In Proceedings of the joint workshop on unsupervised and semi-supervised learning in NLP. 10-18.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Attention Guided Graph Convolutional Networks for Relation Extraction", "authors": [ { "first": "Zhijiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "241--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention Guided Graph Convolutional Networks for Relation Extraction. In Proceedings of the 57th Annual Meet- ing of ACL. 
241-251.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Lorenza", "middle": [], "last": "Romano", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th SemEval", "volume": "", "issue": "", "pages": "33--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid\u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations be- tween Pairs of Nominals. In Proceedings of the 5th SemEval. 
33-38.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia", "authors": [ { "first": "Johannes", "middle": [], "last": "Hoffart", "suffix": "" }, { "first": "M", "middle": [], "last": "Fabian", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Suchanek", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Berberich", "suffix": "" }, { "first": "", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2013, "venue": "Artificial Intelligence", "volume": "194", "issue": "", "pages": "28--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Hoffart, Fabian M Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence 194 (2013), 28-61.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Knowledgebased weak supervision for information extraction of overlapping relations", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of ACL. 
541-550.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Spanbert: Improving pre-training by representing and predicting spans", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.10529" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predict- ing spans. arXiv preprint arXiv:1907.10529 (2019).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 47th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the 47th", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Annual Meeting of the ACL and the 4th International JCNLP of the AFNLP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the ACL and the 4th International JCNLP of the AFNLP. 
1003-1011.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Relation extraction: Perspective from convolutional neural networks", "authors": [ { "first": "Huu", "middle": [], "last": "Thien", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Rela- tion extraction: Perspective from convolutional neu- ral networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Pro- cessing. 39-48.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "DeepDive: Web-scale Knowledgebase Construction using Statistical Learning and Inference", "authors": [ { "first": "Feng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Ce", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" }, { "first": "Jude", "middle": [ "W" ], "last": "Shavlik", "suffix": "" } ], "year": 2012, "venue": "VLDS", "volume": "12", "issue": "", "pages": "25--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Niu, Ce Zhang, Christopher R\u00e9, and Jude W Shavlik. 2012. DeepDive: Web-scale Knowledge- base Construction using Statistical Learning and In- ference. 
VLDS 12 (2012), 25-28.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Knowledge Enhanced Contextual Word Representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "L", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Logan", "suffix": "" }, { "first": "Vidur", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Robert L Lo- gan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge Enhanced Con- textual Word Representations. In EMNLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Impact of Entity Graphs on Extracting Semantic Relations", "authors": [ { "first": "Rashedur", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "Grau", "suffix": "" }, { "first": "Sophie", "middle": [], "last": "Rosset", "suffix": "" } ], "year": 2018, "venue": "Information Management and Big Data", "volume": "", "issue": "", "pages": "31--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashedur Rahman, Brigitte Grau, and Sophie Rosset. 2018. Impact of Entity Graphs on Extracting Se- mantic Relations. In Information Management and Big Data. 
31-47.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Modeling relations and their mentions without labeled text", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2010, "venue": "Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "148--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Machine Learning and Knowl- edge Discovery in Databases. Springer, 148-163.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks", "authors": [ { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 38th International ACM SIGIR", "volume": "", "issue": "", "pages": "373--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to Rank Short Text Pairs with Convolu- tional Deep Neural Networks. In Proceedings of the 38th International ACM SIGIR. 373-382.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Simple BERT Models for Relation Extraction and Semantic Role Labeling", "authors": [ { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.05255" ] }, "num": null, "urls": [], "raw_text": "Peng Shi and Jimmy Lin. 2019. Simple BERT Models for Relation Extraction and Semantic Role Labeling. 
arXiv preprint arXiv:1904.05255 (2019).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Overview of the english slot filling track at the tac2014 knowledge base population evaluation", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2014, "venue": "Proc. TAC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu and Heng Ji. 2014. Overview of the english slot filling track at the tac2014 knowledge base population evaluation. In Proc. TAC.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Stacked Ensembles of Information Extractors for Knowledge-Base Population", "authors": [ { "first": "Vidhoon", "middle": [], "last": "Viswanathan", "suffix": "" }, { "first": "Yinon", "middle": [], "last": "Nazneen Fatema Rajani", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Bentor", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of ACL and the 7th International JCNLP", "volume": "", "issue": "", "pages": "177--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vidhoon Viswanathan, Nazneen Fatema Rajani, Yinon Bentor, and Raymond Mooney. 2015. Stacked En- sembles of Information Extractors for Knowledge- Base Population. In Proceedings of the 53rd Annual Meeting of ACL and the 7th International JCNLP. 
177-187.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Combining Recurrent and Convolutional Neural Networks for Relation Classification", "authors": [ { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Adel", "suffix": "" }, { "first": "Pankaj", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the NAACL-HTL", "volume": "", "issue": "", "pages": "534--539", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hin- rich Sch\u00fctze. 2016. Combining Recurrent and Con- volutional Neural Networks for Relation Classifica- tion. In Proceedings of the 2016 Conference of the NAACL-HTL. 534-539.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers", "authors": [ { "first": "Haoyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Shiyu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Dakuo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoxiao", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Saloni", "middle": [], "last": "Potdar", "suffix": "" } ], "year": 2019, "venue": "Proc. of the 57th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "1371--1377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar. 2019. Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers. In Proc. of the 57th Annual Meeting of ACL. 
1371-1377.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Relation Classification via Multi-Level Attention CNNs", "authors": [ { "first": "Linlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "De Melo", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "1298--1307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation Classification via Multi-Level Attention CNNs. In Proceedings of the 54th Annual Meeting of the ACL. 1298-1307.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Relation validation via textual entailment. Ontology-based information extraction systems", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "G\u00fcnter", "middle": [], "last": "Neumann", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Wang and G\u00fcnter Neumann. 2008. Relation vali- dation via textual entailment. Ontology-based infor- mation extraction systems (obies 2008) (2008).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Morgan Funtowicz, and Jamie Brew. 2019. 
HuggingFace's Transformers: State-of-the-art Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace's Trans- formers: State-of-the-art Natural Language Process- ing. arXiv:1910.03771 (2019).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Enriching Pretrained Language Model with Entity Information for Relation Classification", "authors": [ { "first": "Shanchan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "He", "suffix": "" } ], "year": 2019, "venue": "CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shanchan Wu and Yifan He. 2019. Enriching Pre- trained Language Model with Entity Information for Relation Classification. 
In CIKM.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Structured relation discovery using generative models", "authors": [ { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on EMNLP", "volume": "", "issue": "", "pages": "1456--1466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and An- drew McCallum. 2011. Structured relation discov- ery using generative models. In Proceedings of the Conference on EMNLP. 1456-1466.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "259--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-Based Con- volutional Neural Network for Modeling Sentence Pairs. 
Transactions of the Association for Computa- tional Linguistics 4 (2016), 259-272.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The Wisdom of Minority: Unsupervised Slot Filling Validation based on Multidimensional Truth-Finding", "authors": [ { "first": "Dian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hongzhao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Cassidy", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 2014 International CICLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dian Yu, Hongzhao Huang, Taylor Cassidy, Heng Ji, Chi Wang, and et al. 2014. The Wisdom of Minority: Unsupervised Slot Filling Validation based on Multi- dimensional Truth-Finding, In Proceedings of 2014 International CICLING.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Graph Convolution over Pruned Dependency Trees Improves Relation Extraction", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on EMNLP", "volume": "", "issue": "", "pages": "2205--2215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. In Proceedings of the 2018 Conference on EMNLP. 
2205-2215.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Improving Relation Classification by Entity Pair Graph", "authors": [ { "first": "Yi", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Huaiyu", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianwei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Youfang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "ACML", "volume": "", "issue": "", "pages": "1156--1171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Zhao, Huaiyu Wan, Jianwei Gao, and Youfang Lin. 2019. Improving Relation Classification by Entity Pair Graph. In ACML. 1156-1171.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "A neural network framework for relation extraction: Learning entity semantic and relation pattern", "authors": [ { "first": "Suncong", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Jiaming", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hongyun", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Zhenyu", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Knowledge-Based Systems", "volume": "114", "issue": "", "pages": "12--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suncong Zheng, Jiaming Xu, Peng Zhou, Hongyun Bao, Zhenyu Qi, and Bo Xu. 2016. A neural net- work framework for relation extraction: Learning en- tity semantic and relation pattern. Knowledge-Based Systems 114 (2016), 12-23.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "text": "Both scripts exclude the other class during evaluation.", "type_str": "table", "num": null, "content": "
Data set    Train   Dev     Test    # Relations
SemEval10   8000    -       2717    19
TACRED      68124   22631   15509   42
Table 1: Summary of the SemEval10 and TACRED data sets for relation classification.
6 Task 8 (Hendrickx et al., 2010) from
http://semeval2.fbk.eu/semeval2.php?location=tasks
" }, "TABREF3": { "html": null, "text": "Examples of words used per relation.", "type_str": "table", "num": null, "content": "" }, "TABREF5": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
: Results of the official F 1 metric for the SemEval10 and TACRED data sets. The best result among our tested models is marked in bold. Results that outperform our method are underlined. '*' indicates that the result was obtained by our implementation of (Baldini Soares et al., 2019). Other values were taken from the referenced papers.
                  Number of candidates
                 2                3                4
             Corr.   Incorr.  Corr.   Incorr.  Corr.   Incorr.
BERT+RC+RV   338     154      37      52       2       4
             68.69%  31.30%   41.57%  58.42%   33.33%  66.66%
" }, "TABREF6": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
: Percentage of correct (Corr.) and incorrect (Incorr.) predictions from the RV model for the SemEval10 data set, grouped by the number of candidates provided by RC.
                  Epoch
             1        2        3        4        5
BERT+RC+RV   0.8790   0.8807   0.8793   0.8802   0.8831
" } } } }