{
"paper_id": "D17-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:15:25.267974Z"
},
"title": "Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach",
"authors": [
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Huan",
"middle": [],
"last": "Gui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": "huangui2@illinois.edu"
},
{
"first": "Shi",
"middle": [],
"last": "Zhi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": "shizhi2@illinois.edu"
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": "hanj@illinois.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Relation extraction is a fundamental task in information extraction. Most existing methods have heavy reliance on annotations labeled by human experts, which are costly and time-consuming. To overcome this drawback, we propose a novel framework, REHESSION, to conduct relation extractor learning using annotations from heterogeneous information source, e.g., knowledge base and domain heuristics. These annotations, referred as heterogeneous supervision, often conflict with each other, which brings a new challenge to the original relation extraction task: how to infer the true label from noisy labels for a given instance. Identifying context information as the backbone of both relation extraction and true label discovery, we adopt embedding techniques to learn the distributed representations of context, which bridges all components with mutual enhancement in an iterative fashion. Extensive experimental results demonstrate the superiority of REHESSION over the state-of-the-art.",
"pdf_parse": {
"paper_id": "D17-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Relation extraction is a fundamental task in information extraction. Most existing methods have heavy reliance on annotations labeled by human experts, which are costly and time-consuming. To overcome this drawback, we propose a novel framework, REHESSION, to conduct relation extractor learning using annotations from heterogeneous information source, e.g., knowledge base and domain heuristics. These annotations, referred as heterogeneous supervision, often conflict with each other, which brings a new challenge to the original relation extraction task: how to infer the true label from noisy labels for a given instance. Identifying context information as the backbone of both relation extraction and true label discovery, we adopt embedding techniques to learn the distributed representations of context, which bridges all components with mutual enhancement in an iterative fashion. Extensive experimental results demonstrate the superiority of REHESSION over the state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the most important tasks towards text understanding is to detect and categorize semantic relations between two entities in a given context. For example, in Fig. 1 , with regard to the sentence of c 1 , relation between Jesse James and Missouri should be categorized as died in. With accurate identification, relation extraction systems can provide essential support for many applications. One * Equal contribution. example is question answering, regarding a specific question, relation among entities can provide valuable information, which helps to seek better answers (Bao et al., 2014) . Similarly, for medical science literature, relations like protein-protein interactions (Fundel et al., 2007) and gene disease associations (Chun et al., 2006) can be extracted and used in knowledge base population. Additionally, relation extractors can be used in ontology construction (Schutz and Buitelaar, 2005) .",
"cite_spans": [
{
"start": 577,
"end": 595,
"text": "(Bao et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 685,
"end": 706,
"text": "(Fundel et al., 2007)",
"ref_id": "BIBREF9"
},
{
"start": 737,
"end": 756,
"text": "(Chun et al., 2006)",
"ref_id": "BIBREF6"
},
{
"start": 884,
"end": 912,
"text": "(Schutz and Buitelaar, 2005)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 163,
"end": 169,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typically, existing methods follow the supervised learning paradigm, and require extensive annotations from domain experts, which are costly and time-consuming. To alleviate such drawback, attempts have been made to build relation extractors with a small set of seed instances or humancrafted patterns (Nakashole et al., 2011; Carlson et al., 2010) , based on which more patterns and instances will be iteratively generated by bootstrap learning. However, these methods often suffer from semantic drift (Mintz et al., 2009) . Besides, knowledge bases like Freebase have been leveraged to automatically generate training data and provide distant supervision (Mintz et al., 2009) . Nevertheless, for many domain-specific applications, distant supervision is either non-existent or insufficient (usually less than 25% of relation mentions are covered (Ren et al., 2015; Ling and Weld, 2012) ).",
"cite_spans": [
{
"start": 302,
"end": 326,
"text": "(Nakashole et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 327,
"end": 348,
"text": "Carlson et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 503,
"end": 523,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF19"
},
{
"start": 657,
"end": 677,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF19"
},
{
"start": 848,
"end": 866,
"text": "(Ren et al., 2015;",
"ref_id": "BIBREF24"
},
{
"start": 867,
"end": 887,
"text": "Ling and Weld, 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Only recently have preliminary studies been developed to unite different supervisions, including knowledge bases and domain specific patterns, which are referred as heterogeneous supervision. As shown in Fig. 1 , these supervisions often conflict with each other (Ratner et al., 2016) . To address these conflicts, data programming (Ratner et al., 2016 ) employs a generative model, which encodes supervisions as labeling functions, and adopts the source consistency assumption: a source is likely to provide true information with Robert Newton \"Bob\" Ford was an American outlaw best known for killing his gang leader Jesse James ( ) in Missouri ( ) Hussein ( ) was born in Amman ( ) on 14 November 1935.",
"cite_spans": [
{
"start": 263,
"end": 284,
"text": "(Ratner et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 332,
"end": 352,
"text": "(Ratner et al., 2016",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 204,
"end": 210,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Gofraid ( ) died in 989, said to be killed in Dal Riata ( ). return died_in for < , , s> if DiedIn( , ) in KB return born_in for < , , s> if match(' * born in * ', s) return died_in for < , , s> if match(' * killed in * ', s) return born_in for < , , s> if BornIn ( , ) the same probability for all instances. This assumption is widely used in true label discovery literature (Li et al., 2016) to model reliabilities of information sources like crowdsourcing and infer the true label from noisy labels. Accordingly, most true label discovery methods would trust a human annotator on all instances to the same level.",
"cite_spans": [
{
"start": 264,
"end": 269,
"text": "( , )",
"ref_id": null
},
{
"start": 376,
"end": 393,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, labeling functions, unlike human annotators, do not make casual mistakes but follow certain \"error routine\". Thus, the reliability of a labeling function is not consistent among different pieces of instances. In particular, a labeling function could be more reliable for a certain subset (Varma et al., 2016 ) (also known as its proficient subset) comparing to the rest. We identify these proficient subsets based on context information, only trust labeling functions on these subsets and avoid assuming global source consistency.",
"cite_spans": [
{
"start": 297,
"end": 316,
"text": "(Varma et al., 2016",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meanwhile, embedding methods have demonstrated great potential in capturing semantic meanings, which also reduce the dimension of overwhelming text features. Here, we present REHES-SION, a novel framework capturing context's semantic meaning through representation learning, and conduct both relation extraction and true label discovery in a context-aware manner. Specifically, as depicted in Fig. 1 , we embed relation mentions in a low-dimension vector space, where similar relation mentions tend to have similar relation types and annotations. 'True' labels are further inferred based on reliabilities of labeling functions, which are calculated with their proficient subsets' representations. Then, these inferred true labels would serve as supervision for all components, including context representation, true label discovery and relation extraction. Besides, the context representation bridges relation extraction with true label dis- To the best of our knowledge, the framework proposed here is the first method that utilizes representation learning to provide heterogeneous supervision for relation extraction. The high-quality context representations serve as the backbone of true label discovery and relation extraction. Extensive experiments on benchmark datasets demonstrate significant improvements over the state-ofthe-art.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 399,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining of this paper is organized as follows. Section 2 gives the definition of relation extraction with heterogeneous supervision. We then present the REHESSION model and the learning algorithm in Section 3, and report our experimental evaluation in Section 4. Finally, we briefly survey related work in Section 5 and conclude this study in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we would formally define relation extraction and heterogeneous supervision, including the format of labeling functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Here we conduct relation extraction in sentencelevel (Bao et al., 2014) . For a sentence d, an entity mention is a token span in d which represents an entity, and a relation mention is a triple (e 1 , e 2 , d) which consists of an ordered entity pair (e 1 , e 2 ) and d. And the relation extraction task is to categorize relation mentions into a given set of relation types R, or Not-Target-Type (None) which means the type of the relation mention does not belong to R.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(Bao et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "2.1"
},
{
"text": "Similar to (Ratner et al., 2016), we employ labeling functions as basic units to encode supervision information and generate annotations. Since different supervision information may have different proficient subsets, we require each labeling function to encode only one elementary supervision information. Specifically, in the relation extraction scenario, we require each labeling function to only annotate one relation type based on one elementary piece of information, e.g., four examples are listed in Fig. 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 506,
"end": 512,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Heterogeneous Supervision",
"sec_num": "2.2"
},
{
"text": "Notice that knowledge-based labeling functions are also considered to be noisy because relation extraction is conducted in sentence-level, e.g. although president of (Obama, USA) exists in KB, it should not be assigned with \"Obama was born in Honolulu, Hawaii, USA\", since president of is irrelevant to the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Supervision",
"sec_num": "2.2"
},
{
"text": "For a POS-tagged corpus D with detected entities, we refer its relation mentions as C = {c i = (e i,1 , e i,2 , d), \u2200d \u2208 D}. Our goal is to annotate entity mentions with relation types of interest (R = {r 1 , . . . , r K }) or None. We require users to provide heterogeneous supervision in the form of labeling function \u039b = {\u03bb 1 , . . . , \u03bb M }, and mark the annotations generated by \u039b as O = {o c,i |\u03bb i generate annotation o c,i for c \u2208 C}. We record relation mentions annotated by \u039b as C l , and refer relation mentions without annotation as C u . Then, our task is to train a relation extractor based on C l and categorize relation mentions in C u .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.3"
},
{
"text": "Here, we present REHESSION, a novel framework to infer true labels from automatically generated noisy labels, and categorize unlabeled instances fc c's text features set, where c \u2208 C vi text feature embedding for fi \u2208 F zc relation mention embedding for c \u2208 C li embedding for \u03bbi's proficient subset, \u03bbi \u2208 \u039b oc,i annotation for c, generated by labeling function \u03bbi o * c underlying true label for c \u03c1c,i identify whether oc,i is correct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The REHESSION Framework",
"sec_num": "3"
},
{
"text": "Si the proficient subset of labeling function \u03bbi sc,i identify whether c belongs to \u03bbi's proficient subset ti relation type embedding for ri \u2208 R Table. into a set of relation types. Intuitively, errors of annotations (O) come from mismatch of contexts, e.g., in Fig. 1 , \u03bb 1 annotates c 1 and c 2 with 'true' labels but for mismatched contexts 'killing' and 'killed'. Accordingly, we should only trust labeling functions on matched context, e.g., trust \u03bb 1 on c 3 due to its context 'was born in', but not on c 1 and c 2 . On the other hand, relation extraction can be viewed as matching appropriate relation type to a certain context. These two matching processes are closely related and can enhance each other, while context representation plays an important role in both of them.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 151,
"text": "Table.",
"ref_id": null
},
{
"start": 262,
"end": 268,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The REHESSION Framework",
"sec_num": "3"
},
{
"text": "Framework Overview.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The REHESSION Framework",
"sec_num": "3"
},
{
"text": "We propose a general framework to learn the relation extractor from automatically generated noisy labels. As plotted in Fig. 1 , distributed representation of context bridges relation extraction with true label discovery, and allows them to enhance each other. Specifically, it follows the steps below:",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 126,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The REHESSION Framework",
"sec_num": "3"
},
{
"text": "1. After being extracted from context, text features are embedded in a low dimension space by representation learning (see 3. With relation mention embeddings, true labels are inferred by calculating labeling functions' reliabilities in a context-aware manner (see Fig. 1 ); 4. Inferred true labels would 'supervise' all components to learn model parameters (see Fig. 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 265,
"end": 271,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 363,
"end": 369,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The REHESSION Framework",
"sec_num": "3"
},
{
"text": "We now proceed by introducing these components of the model in further details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The REHESSION Framework",
"sec_num": "3"
},
{
"text": "As shown in Table 2 , we extract abundant lexical features (Ren et al., 2016; Mintz et al., 2009) to characterize relation mentions. However, this abundance also results in the gigantic dimension of original text features (\u223c 10 7 in our case). In",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "(Ren et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 78,
"end": 97,
"text": "Mintz et al., 2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modeling Relation Mention",
"sec_num": "3.1"
},
{
"text": "Example Entity mention (EM) head Syntactic head token of each entity mention \"HEAD EM1 Hussein\", ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description",
"sec_num": null
},
{
"text": "Tokens in each entity mention \"TKN EM1 Hussein\", ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Mention Token",
"sec_num": null
},
{
"text": "Tokens between two EMs \"was\", \"born\", \"in\" Part-of-speech (POS) tag POS tags of tokens between two EMs \"VBD\", \"VBN\", \"IN\" Collocations Bigrams in left/right 3-word window of each EM \"Hussein was\", \"in Amman\" Entity mention order Whether EM 1 is before EM 2 \"EM1 BEFORE EM2\" Entity mention distance Number of tokens between the two EMs \"EM DISTANCE 3\" Body entity mentions numbers Number of EMs between the two EMs \"EM NUMBER 0\" Entity mention context Unigrams before and after each EM \"EM AFTER was\", ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokens between two EMs",
"sec_num": null
},
{
"text": "Brown cluster ID for each token \"BROWN 010011001\", ... Table 2 : Text features F used in this paper. (\"Hussein\", \"Amman\",\"Hussein was born in Amman\") is used as an example.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "order to achieve better generalization ability, we represent relation mentions with low dimensional (\u223c 10 2 ) vectors. In Fig. 2 , for example, relation mention c 3 is first represented as bag-of-features.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 128,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "After learning text feature embeddings, we use the average of feature embedding vectors to derive the embedding vector for c 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "Text Feature Representation. Similar to other principles of embedding learning, we assume text features occurring in the same contexts tend to have similar meanings (also known as distributional hypothesis (Harris, 1954) ). Furthermore, we let each text feature's embedding vector to predict other text features occurred in the same relation mentions or context. Thus, text features with similar meaning should have similar embedding vectors. Formally, we mark text features as F = {f 1 , \u2022 \u2022 \u2022 , f |F| }, record the feature set for \u2200c \u2208 C as f c , and represent the embedding vector for f i as v i \u2208 R nv , and we aim to maximize the following log likelihood:",
"cite_spans": [
{
"start": 206,
"end": 220,
"text": "(Harris, 1954)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "\u2211 c\u2208C l \u2211 f i ,f j \u2208fc log p(fi|fj),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "p(fi|fj) = exp(v T i v * j )/ \u2211 f k \u2208F exp(v T i v * k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "However, the optimization of this likelihood is impractical because the calculation of \u2207p(f i |f j ) requires summation over all text features, whose size exceeds 10 7 in our case. In order to perform efficient optimization, we adopt the negative sampling technique (Mikolov et al., 2013) to avoid this summation. Accordingly, we replace the log likelihood with Eq. 1 as below:",
"cite_spans": [
{
"start": 266,
"end": 288,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "JE = \u2211 c\u2208C l f i ,f j \u2208fc (log \u03c3(v T i v * j )\u2212 V \u2211 k=1 E f k \u2032 \u223cP [log \u03c3(\u2212v T i v * k \u2032 )])",
"eq_num": "(1)"
}
],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "whereP is noise distribution used in (Mikolov et al., 2013) , \u03c3 is the sigmoid function and V is number of negative samples. ",
"cite_spans": [
{
"start": 37,
"end": 59,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "z c |C| |\u21e4| l i s c,i \u21e2 c,i |O| Figure 3: Graphical model of o c,i 's correctness",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "represent relation mentions is to concatenate or average its text feature embeddings. However, text features embedding may be in a different semantic space with relation types. Thus, we directly learn a mapping g from text feature representations to relation mention representations (Van Gysel et al., 2016a,b) instead of simple heuristic rules like concatenate or average (see Fig. 2 ):",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 384,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "zc = g(fc) = tanh(W \u2022 1 |fc| \u2211 f i \u2208fc vi)",
"eq_num": "(2)"
}
],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "where z c is the representation of c \u2208 C l , W is a n z \u00d7 n v matrix, n z is the dimension of relation mention embeddings and tanh is the element-wise hyperbolic tangent function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "In other words, we represent bag of text features with their average embedding, then apply linear map and hyperbolic tangent to transform the embedding from text feature semantic space to relation mention semantic space. The non-linear tanh function allows non-linear class boundaries in other components, and also regularize relation mention representation to range [\u22121, 1] which avoids numerical instability issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brown cluster (learned on D)",
"sec_num": null
},
{
"text": "Because heterogeneous supervision generates labels in a discriminative way, we suppose its errors follow certain underlying principles, i.e., if a Datasets NYT Wiki-KBP % of None in Training 0.6717 0.5552 % of None in Test 0.8972 0.8532 Table 3 : Proportion of None in Training/Test Set labeling function annotates a instance correctly / wrongly, it would annotate other similar instances correctly / wrongly. For example, \u03bb 1 in Fig. 1 generates wrong annotations for two similar instances c 1 , c 2 and would make the same errors on other similar instances. Since context representation captures the semantic meaning of relation mention and would be used to identify relation types, we also use it to identify the mismatch of context and labeling functions. Thus, we suppose for each labeling function \u03bb i , there exists an proficient subset S i on R nz , containing instances that \u03bb i can precisely annotate. In Fig. 1 , for instance, c 3 is in the proficient subset of \u03bb 1 , while c 1 and c 2 are not. Moreover, the generation of annotations are not really random, and we propose a probabilistic model to describe the level of mismatch from labeling functions to real relation types instead of annotations' generation.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 3",
"ref_id": null
},
{
"start": 430,
"end": 436,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 915,
"end": 921,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": "As shown in Fig. 3 , we assume the indicator of whether c belongs to S i , s c,i = \u03b4(c \u2208 S i ), would first be generated based on context representation",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": "p(s c,i = 1|z c , l i ) = p(c \u2208 S i ) = \u03c3(z T c l i ) (3) Then the correctness of annotation o c,i , \u03c1 c,i = \u03b4(o c,i = o * c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": ", would be generated. Furthermore, we assume p(\u03c1 c,i = 1|s c,i = 1) = \u03d5 1 and p(\u03c1 c,i = 1|s c,i = 0) = \u03d5 0 to be constant for all relation mentions and labeling functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": "Because s c,i would not be used in other components of our framework, we integrate out s c,i and write the log likelihood as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "JT = \u2211 o c,i \u2208O log(\u03c3(z T c li)\u03d5 \u03b4(o c,i =o * c ) 1 (1 \u2212 \u03d51) \u03b4(o c,i \u0338 =o * c ) + (1 \u2212 \u03c3(z T c li))\u03d5 \u03b4(o c,i =o * c ) 0 (1 \u2212 \u03d50) \u03b4(o c,i \u0338 =o * c ) )",
"eq_num": "(4)"
}
],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": "Note that o * c is a hidden variable but not a model parameter, and J T is the likelihood of \u03c1 c,i = \u03b4(o c,i = o * c ). Thus, we would first infer o * c = argmax o * c J T , then train the true label discovery model by maximizing J T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "True Label Discovery",
"sec_num": "3.2"
},
{
"text": "We now discuss the model for identifying relation types based on context representation. For each relation mention c, its representation z c implies its relation type, and the distribution of relation type can be described by the soft-max function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(ri|zc) = exp(z T c ti) \u2211 r j \u2208R\u222a{None} exp(z T c tj)",
"eq_num": "(5)"
}
],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "where t i \u2208 R vz is the representation for relation type r i . Moreover, with the inferred true label o * c , the relation extraction model can be trained as a multi-class classifier. Specifically, we use Eq. 5 to approach the distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(ri|o * c ) = { 1 ri = o * c 0 ri \u0338 = o * c",
"eq_num": "(6)"
}
],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "Moreover, we use KL-divergence to measure the dissimilarity between two distributions, and formulate model learning as maximizing J R :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "JR = \u2212 \u2211 c\u2208C l KL(p(.|zc)||p(.|o * c ))",
"eq_num": "(7)"
}
],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "where KL(p(.|zc)||p(.|o * c )) is the KL-divergence from p(ri|o * c ) to p(ri|zc), p(ri|zc) and p(ri|o * c ) has the form of Eq. 5 and Eq. 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Relation Type",
"sec_num": "3.3"
},
{
"text": "Based on Eq. 1, Eq. 4 and Eq. 7, we form the joint optimization problem for model parameters as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Learning",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min W,v,v * ,l,t,o * J = \u2212JR \u2212 \u03bb1JE \u2212 \u03bb2JT s.t. \u2200c \u2208 C l , o * c = argmax o * c JT , zc = g(fc)",
"eq_num": "(8)"
}
],
"section": "Model Learning",
"sec_num": "3.4"
},
{
"text": "Collectively optimizing Eq. 8 allows heterogeneous supervision guiding all three components, while these components would refine the context representation, and enhance each other. In order to solve the joint optimization problem in Eq. 8 efficiently, we adopt the stochastic gradient descent algorithm to update {W, v, v * , l, t} iteratively, and o c * is estimated by maximizing J T after calculating z c . Additionally, we apply the widely used dropout techniques (Srivastava et al., 2014) to prevent overfitting and improve generalization performance.",
"cite_spans": [
{
"start": 468,
"end": 493,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Learning",
"sec_num": "3.4"
},
{
"text": "The learning process of REHESSION is summarized as below. In each iteration, we would sample a relation mention c from C l , then sample c's text features and conduct the text features' representation learning. After calculating the representation of c, we would infer its true label o * c based on our true label discovery model, and finally update model parameters based on o * c .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Learning",
"sec_num": "3.4"
},
{
"text": "We now discuss the strategy of performing type inference for C u . As shown in Table 3 , the proportion of None in C u is usually much larger than in C l . Additionally, not like other relation types in R, None does not have a coherent semantic meaning. Similar to (Ren et al., 2016) , we introduce a heuristic rule: identifying a relation mention as None when (1) our relation extractor predict it as None, or (2) the entropy of p(.|zc) over R exceeds a pre-defined threshold \u03b7. The entropy is calculated as H(p(.|zc)) = \u2212",
"cite_spans": [
{
"start": 265,
"end": 283,
"text": "(Ren et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation Type Inference",
"sec_num": "3.5"
},
{
"text": "\u2211 r i \u2208R p(ri|zc)log(p(ri|zc)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Inference",
"sec_num": "3.5"
},
{
"text": "And the second situation means based on relation extractor this relation mention is not likely belonging to any relation types in R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Inference",
"sec_num": "3.5"
},
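The two-condition heuristic above can be sketched as follows (a simplified illustration, not the released implementation; the function name, the dictionary representation, and the default threshold are our assumptions):

```python
import math

def infer_type(p, none_label="None", eta=1.0):
    # p: dict mapping each relation type (plus None) to p(r|z_c)
    # Rule: output None if (1) the extractor's top prediction is None, or
    # (2) the entropy of p(.|z_c) exceeds the pre-defined threshold eta
    entropy = -sum(p_r * math.log(p_r) for p_r in p.values() if p_r > 0)
    best = max(p, key=p.get)
    return none_label if best == none_label or entropy > eta else best
```

A peaked distribution keeps the predicted type, while a near-uniform one (high entropy) falls back to None.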
{
"text": "In this section, we empirically validate our method by comparing to the state-of-the-art relation extraction methods on news and Wikipedia articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In the experiments, we conduct investigations on two benchmark datasets from different domains: 1 NYT (Riedel et al., 2010 ) is a news corpus sampled from \u223c 294k 1989-2007 New York Times news articles. It consists of 1.18M sentences, while 395 of them are annotated by authors of (Hoffmann et al., 2011) and used as test data;",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Riedel et al., 2010",
"ref_id": "BIBREF27"
},
{
"start": 280,
"end": 303,
"text": "(Hoffmann et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and settings",
"sec_num": "4.1"
},
{
"text": "Wiki-KBP utilizes 1.5M sentences sampled from 780k Wikipedia articles (Ling and Weld, 2012) as training corpus, while test set consists of the 2k sentences manually annotated in 2013 KBP slot filling assessment results (Ellis et al., 2012) . For both datasets, the training and test sets partitions are maintained in our experiments. Furthermore, we create validation sets by randomly sampling 10% mentions from each test set and used the remaining part as evaluation sets.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "(Ling and Weld, 2012)",
"ref_id": "BIBREF16"
},
{
"start": 219,
"end": 239,
"text": "(Ellis et al., 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and settings",
"sec_num": "4.1"
},
{
"text": "Feature Generation. As summarized in Table 2 , we use a 6-word window to extract context features for each entity mention, apply the Stanford 1 Codes and datasets used in this paper can be downloaded at:",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets and settings",
"sec_num": "4.1"
},
{
"text": "https://github.com/LiyuanLucasLiu/ ReHession.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and settings",
"sec_num": "4.1"
},
{
"text": "Wiki In our experiments, labeling functions are employed to encode two kinds of supervision information. One is knowledge base, the other is handcrafted domain-specific patterns. For domain-specific patterns, we manually design a number of labeling functions 3 ; for knowledge base, annotations are generated following the procedure in (Ren et al., 2016; Riedel et al., 2010) .",
"cite_spans": [
{
"start": 336,
"end": 354,
"text": "(Ren et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 355,
"end": 375,
"text": "Riedel et al., 2010)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kind",
"sec_num": null
},
{
"text": "Regarding two kinds of supervision information, the statistics of the labeling functions are summarized in Table 4 . We can observe that heuristic patterns can identify more relation types for KBP datasets, while for NYT datasets, knowledge base can provide supervision for more relation types. This observation aligns with our intuition that single kind of information might be insufficient while different kinds of information can complement each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Kind",
"sec_num": null
},
{
"text": "We further summarize the statistics of annotations in Table 6 . It can be observed that a large portion of instances is only annotated as None, while lots of conflicts exist among other instances. This phenomenon justifies the motivation to employ true label discovery model to resolve the conflicts among supervision. Also, we can observe most conflicts involve None type, accordingly, our proposed method should have more advantages over traditional true label discovery methods on the relation extraction task comparing to the relation classification task that excludes None type.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Kind",
"sec_num": null
},
{
"text": "We compare REHESSION with below methods: FIGER (Ling and Weld, 2012) (Bunescu and Mooney, 2005 ) applies bag-offeature kernel to train a support vector machine; DSL (Mintz et al., 2009 ) trains a multi-class logistic classifier 4 on the training data;",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Ling and Weld, 2012)",
"ref_id": "BIBREF16"
},
{
"start": 69,
"end": 94,
"text": "(Bunescu and Mooney, 2005",
"ref_id": "BIBREF4"
},
{
"start": 165,
"end": 184,
"text": "(Mintz et al., 2009",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Methods",
"sec_num": "4.2"
},
{
"text": "MultiR (Hoffmann et al., 2011 ) models training label noise by multi-instance multi-label learning; FCM (Gormley et al., 2015) performs compositional embedding by neural language model. CoType-RM (Ren et al., 2016) adopts partial-label loss to handle label noise and train the extractor. Moreover, two different strategies are adopted to feed heterogeneous supervision to these methods. The first is to keep all noisy labels, marked as 'NL'. Alternatively, a true label discovery method, Investment (Pasternack and Roth, 2010) , is applied to resolve conflicts, which is based on the source consistency assumption and iteratively updates inferred true labels and label functions' reliabilities. Then, the second strategy is to only feed the inferred true labels, referred as 'TD'. Universal Schemas (Riedel et al., 2013) is proposed to unify different information by calculating a low-rank approximation of the annotations O. It can serve as an alternative of the Investment method, i.e., selecting the relation type with highest score in the low-rank approximation as the true type. But it doesnt explicitly model noise and not fit our scenario very well. Due to the constraint of space, we only compared our method to Investment in most experiments, and Universal Schemas is listed as a baseline in Sec. 4.4. Indeed, it performs similarly to the Investment method.",
"cite_spans": [
{
"start": 7,
"end": 29,
"text": "(Hoffmann et al., 2011",
"ref_id": "BIBREF12"
},
{
"start": 196,
"end": 214,
"text": "(Ren et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 499,
"end": 526,
"text": "(Pasternack and Roth, 2010)",
"ref_id": "BIBREF22"
},
{
"start": 799,
"end": 820,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Methods",
"sec_num": "4.2"
},
{
"text": "For relation classification task, which excludes None type from training / testing, we use the classification accuracy (Acc) for evaluation, and for relation extraction task, precision (Prec), recall (Rec) and F1 score (Bunescu and Mooney, 2005; Bach and Badaskar, 2007) are employed. Notice that both relation extraction and relation classification are conducted and evaluated in sentence-level (Bao et al., 2014) .",
"cite_spans": [
{
"start": 219,
"end": 245,
"text": "(Bunescu and Mooney, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 246,
"end": 270,
"text": "Bach and Badaskar, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 396,
"end": 414,
"text": "(Bao et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics.",
"sec_num": null
},
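For concreteness, sentence-level precision, recall, and F1 with None treated as "no relation" can be computed as follows (a standard formulation of these metrics; the helper name and list-based interface are our assumptions):

```python
def precision_recall_f1(gold, pred, none_label="None"):
    # A prediction counts as extracted only when it is not None;
    # it is correct when it matches the gold label exactly
    extracted = [(g, p) for g, p in zip(gold, pred) if p != none_label]
    correct = sum(1 for g, p in extracted if g == p)
    gold_positive = sum(1 for g in gold if g != none_label)
    prec = correct / len(extracted) if extracted else 0.0
    rec = correct / gold_positive if gold_positive else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
    return prec, rec, f1
```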
{
"text": "Ann Demeulemeester ( born 1959 , Waregem , Belgium ) is ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REHESSION Investment & Universal Schemas",
"sec_num": null
},
{
"text": "Raila Odinga was born at ..., in Maseno, Kisumu District, ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "born-in None",
"sec_num": null
},
{
"text": "Ann Demeulemeester ( elected 1959 , Waregem , Belgium ) is ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "born-in None",
"sec_num": null
},
{
"text": "Raila Odinga was examined at ..., in Maseno, Kisumu District, ... ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "None None",
"sec_num": null
},
{
"text": "Given the experimental setup described above, the averaged evaluation scores in 10 runs of relation classification and relation extraction on two datasets are summarized in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.3"
},
{
"text": "From the comparison, it shows that NL strategy yields better performance than TD strategy, since the true labels inferred by Investment are actually wrong for many instances. On the other hand, as discussed in Sec. 4.4, our method introduces context-awareness to true label discovery, while the inferred true label guides the relation extractor achieving the best performance. This observation justifies the motivation of avoiding the source consistency assumption and the effectiveness of proposed true label discovery model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.3"
},
{
"text": "One could also observe the difference between REHESSION and the compared methods is more significant on the NYT dataset than on the Wiki-KBP dataset. This observation accords with the fact that the NYT dataset contains more conflicts than KBP dataset (see Table 6 ), and the intuition is that our method would have more advantages on more conflicting labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.3"
},
{
"text": "Among four tasks, the relation classification of Wiki-KBP dataset has highest label quality, i.e. conflicting label ratio, but with least number of training instances. And CoType-RM and DSL reach relatively better performance among all compared methods. CoType-RM performs much better than DSL on Wiki-KBP relation classification task, while DSL gets better or similar performance with CoType-RM on other tasks. This may be because the representation learning method is able to generalize better, thus performs better when the training set size is small. However, it is rather vulnerable to the noisy labels compared to DSL. Our method employs embedding techniques, and also integrates context-aware true label dis- Comparison among REHESSION (Ori), REHESSION-US (US) and REHESSION-TD (TD) on relation extraction and relation classification covery to de-noise labels, making the embedding method rather robust, thus achieves the best performance on all tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.3"
},
{
"text": "Although Universal Schemas does not adopted the source consistency assumption, but it's conducted in document-level, and is context-agnostic in our sentence-level setting. Similarly, most true label discovery methods adopt the source consistency assumption, which means if they trust a labeling function, they would trust it on all annotations. And our method infers true labels in a context-aware manner, which means we only trust labeling functions on matched contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Awareness of True Label Discovery.",
"sec_num": null
},
{
"text": "For example, Investment and Universal Schemas refer None as true type for all four instances in Table 7 . And our method infers born-in as the true label for the first two relation mentions; after replacing the matched contexts (born) with other words (elected and examined), our method no longer trusts born-in since the modified contexts are no longer matched, then infers None as the true label. In other words, our proposed method infer the true label in a context aware manner.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Context Awareness of True Label Discovery.",
"sec_num": null
},
{
"text": "We explore the effectiveness of the proposed contextaware true label discovery component by comparing REHESSION to its variants REHESSION-TD and REHESSION-US, which uses Investment or Universal Schemas to resolve conflicts. The averaged evaluation scores are summarized in Table 8. We can observe that REHESSION significantly outperforms its variants. Since the only difference between REHESSION and its variants is the model employed to resolve conflicts, this gap verifies the effectiveness of the proposed contextaware true label discovery method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of True Label Discovery.",
"sec_num": null
},
{
"text": "Relation extraction aims to detect and categorize semantic relations between a pair of entities. To alleviate the dependency of annotations given by human experts, weak supervision (Bunescu and Mooney, 2007; Etzioni et al., 2004) and distant supervision (Ren et al., 2016) have been employed to automatically generate annotations based on knowledge base (or seed patterns/instances). Universal Schemas (Riedel et al., 2013; Verga et al., 2015; Toutanova et al., 2015) has been proposed to unify patterns and knowledge base, but it's designed for document-level relation extraction, i.e., not to categorize relation types based on a specific context, but based on the whole corpus. Thus, it allows one relation mention to have multiple true relation types; and does not fit our scenario very well, which is sentence-level relation extraction and assumes one instance has only one relation type. Here we propose a more general framework to consolidate heterogeneous information and further refine the true label from noisy labels, which gives the relation extractor potential to detect more types of relations in a more precise way.",
"cite_spans": [
{
"start": 181,
"end": 207,
"text": "(Bunescu and Mooney, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 208,
"end": 229,
"text": "Etzioni et al., 2004)",
"ref_id": "BIBREF8"
},
{
"start": 254,
"end": 272,
"text": "(Ren et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 402,
"end": 423,
"text": "(Riedel et al., 2013;",
"ref_id": "BIBREF28"
},
{
"start": 424,
"end": 443,
"text": "Verga et al., 2015;",
"ref_id": "BIBREF36"
},
{
"start": 444,
"end": 467,
"text": "Toutanova et al., 2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "5.1"
},
{
"text": "Word embedding has demonstrated great potential in capturing semantic meaning (Mikolov et al., 2013) , and achieved great success in a wide range of NLP tasks like relation extraction (Zeng et al., 2014; Takase and Inui, 2016; Nguyen and Grishman, 2015 ). In our model, we employed the embedding techniques to represent context information, and reduce the dimension of text features, which allows our model to generalize better.",
"cite_spans": [
{
"start": 78,
"end": 100,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 184,
"end": 203,
"text": "(Zeng et al., 2014;",
"ref_id": "BIBREF37"
},
{
"start": 204,
"end": 226,
"text": "Takase and Inui, 2016;",
"ref_id": "BIBREF31"
},
{
"start": 227,
"end": 252,
"text": "Nguyen and Grishman, 2015",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "5.1"
},
{
"text": "True label discovery methods have been developed to resolve conflicts among multi-source information under the assumption of source consistency (Li et al., 2016; Zhi et al., 2015) . Specifically, in the spammer-hammer model (Karger et al., 2011), each source could either be a spammer, which annotates instances randomly; or a hammer, which annotates instances precisely. In this paper, we assume each labeling function would be a hammer on its proficient subset, and would be a spammer otherwise, while the proficient subsets are identified in the embedding space.",
"cite_spans": [
{
"start": 144,
"end": 161,
"text": "(Li et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 162,
"end": 179,
"text": "Zhi et al., 2015)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Truth Label Discovery",
"sec_num": "5.2"
},
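Under this assumption, a labeling function's behavior can be simulated as below (an illustrative sketch only; here the proficient subset is given explicitly, whereas the paper identifies it in the embedding space):

```python
import random

def annotate(proficient_subset, context, true_label, relation_types, rng=random):
    # Hammer on its proficient subset: returns the correct label;
    # spammer elsewhere: returns a uniformly random relation type
    if context in proficient_subset:
        return true_label
    return rng.choice(relation_types)
```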
{
"text": "Besides data programming, socratic learning (Varma et al., 2016) has been developed to conduct binary classification under heterogeneous supervi-sion. Its true label discovery module supervises the discriminative module in label level, while the discriminative module influences the true label discovery module by selecting a feature subset. Although delicately designed, it fails to make full use of the connection between these modules, i.e., not refine the context representation for classifier. Thus, its discriminative module might suffer from the overwhelming size of text features.",
"cite_spans": [
{
"start": 44,
"end": 64,
"text": "(Varma et al., 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Truth Label Discovery",
"sec_num": "5.2"
},
{
"text": "In this paper, we propose REHESSION, an embedding framework to extract relation under heterogeneous supervision. When dealing with heterogeneous supervisions, one unique challenge is how to resolve conflicts generated by different labeling functions. Accordingly, we go beyond the \"source consistency assumption\" in prior works and leverage context-aware embeddings to induce proficient subsets. The resulting framework bridges true label discovery and relation extraction with context representation, and allows them to mutually enhance each other. Experimental evaluation justifies the necessity of involving contextawareness, the quality of inferred true label, and the effectiveness of the proposed framework on two real-world datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "There exist several directions for future work. One is to apply transfer learning techniques to handle label distributions' difference between training set and test set. Another is to incorporate OpenIE methods to automatically find domainspecific patterns and generate pattern-based labeling functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We use liblinear package from https//github. com/cjlin1/liblinear",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Parameter Settings. Based on the semantic meaning of proficient subset, we set \u03d5 2 to 1/|R\u222a{None}|, i.e., the probability of generating right label with random guess. Then we set \u03d5 1 to 1 \u2212 \u03d5 2 , \u03bb 1 = \u03bb 2 = 1, and the learning rate \u03b1 = 0.025. As for other parameters, they are tuned on the validation sets for each dataset. Similarly, all parameters of compared methods are tuned on validation set, and the parameters achieving highest F1 score are chosen for relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Research was sponsored in part by the U.S. Army Research Lab. under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), National Science Foundation IIS-1320617, IIS 16-18481, and NSF IIS 17-04532, and grant 1U54GM114838 awarded by NIGMS through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov). The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies of the U.S. Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A review of relation extraction. Literature review for Language and Statistics II",
"authors": [
{
"first": "Nguyen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Badaskar",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Knowledge-based question answering as machine translation",
"authors": [
{
"first": "Junwei",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Cell",
"volume": "2",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junwei Bao, Nan Duan, Ming Zhou, and Tiejun Zhao. 2014. Knowledge-based question answering as ma- chine translation. Cell, 2(6).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter F Brown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Desouza",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer C",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467-479.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to extract relations from the web using minimal supervision",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond Mooney. 2007. Learn- ing to extract relations from the web using minimal supervision. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Subsequence kernels for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "171--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond J Mooney. 2005. Sub- sequence kernels for relation extraction. In NIPS, pages 171-178.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Coupled semi-supervised learning for information extraction",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tom M",
"middle": [],
"last": "Estevam R Hruschka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the third ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Carlson, Justin Betteridge, Richard C Wang, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Coupled semi-supervised learning for information extraction. In Proceedings of the third ACM inter- national conference on Web search and data mining, pages 101-110. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extraction of genedisease relations from medline using domain dictionaries and machine learning",
"authors": [
{
"first": "Hong-Woo",
"middle": [],
"last": "Chun",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Rie",
"middle": [],
"last": "Shiba",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Teruyoshi",
"middle": [],
"last": "Hishiki",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2006,
"venue": "Pacific Symposium on Biocomputing",
"volume": "11",
"issue": "",
"pages": "4--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong-Woo Chun, Yoshimasa Tsuruoka, Jin-Dong Kim, Rie Shiba, Naoki Nagata, Teruyoshi Hishiki, and Jun'ichi Tsujii. 2006. Extraction of gene- disease relations from medline using domain dictio- naries and machine learning. In Pacific Symposium on Biocomputing, volume 11, pages 4-15.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistic resources for 2013 knowledge base population evaluations",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Xuansong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 2012,
"venue": "TAC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Ellis, Xuansong Li, Kira Griffitt, Stephanie Strassel, and Jonathan Wright. 2012. Linguistic re- sources for 2013 knowledge base population evalu- ations. In TAC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Web-scale information extraction in knowitall:(preliminary results)",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Kok",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 13th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "100--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Michael Cafarella, Doug Downey, Stan- ley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S Weld, and Alexander Yates. 2004. Web-scale information extraction in know- itall:(preliminary results). In Proceedings of the 13th international conference on World Wide Web, pages 100-110. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Relexrelation extraction using dependency parse trees",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Fundel",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "K\u00fcffner",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Zimmer",
"suffix": ""
}
],
"year": 2007,
"venue": "Bioinformatics",
"volume": "23",
"issue": "3",
"pages": "365--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Fundel, Robert K\u00fcffner, and Ralf Zimmer. 2007. Relexrelation extraction using dependency parse trees. Bioinformatics, 23(3):365-371.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved relation extraction with feature-rich compositional embedding models",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Matthew R Gormley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.02419"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. arXiv preprint arXiv:1505.02419.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributional structure. Word",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zellig",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 541-550. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Iterative learning for reliable crowdsourcing systems",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Karger",
"suffix": ""
},
{
"first": "Sewoong",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Devavrat",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1953--1961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R Karger, Sewoong Oh, and Devavrat Shah. 2011. Iterative learning for reliable crowdsourcing systems. In Advances in neural information processing systems, pages 1953-1961.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A survey on truth discovery",
"authors": [
{
"first": "Yaliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chuishi",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "SIGKDD Explor. Newsl",
"volume": "17",
"issue": "2",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaliang Li, Jing Gao, Chuishi Meng, Qi Li, Lu Su, Bo Zhao, Wei Fan, and Jiawei Han. 2016. A survey on truth discovery. SIGKDD Explor. Newsl., 17(2):1-16.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fine-grained entity recognition",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In AAAI. Citeseer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (System Demonstrations)",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pages 55-60.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scalable knowledge harvesting with high precision and high recall",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Theobald",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the fourth ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "227--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable knowledge harvesting with high precision and high recall. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 227-236. ACM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Combining neural networks and log-linear models to improve relation extraction",
"authors": [
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.05926"
]
},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Combining neural networks and log-linear models to improve relation extraction. arXiv preprint arXiv:1511.05926.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Knowing what to believe (when you already know something)",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Pasternack",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "877--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Pasternack and Dan Roth. 2010. Knowing what to believe (when you already know something). In Proceedings of the 23rd International Conference on Computational Linguistics, pages 877-885. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Data programming: Creating large training sets, quickly",
"authors": [
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "De Sa",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Selsam",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3567--3575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data programming: Creating large training sets, quickly. In Advances in Neural Information Processing Systems, pages 3567-3575.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Clustype: Effective entity recognition and typing by relation phrase-based clustering",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Kishky",
"suffix": ""
},
{
"first": "Chi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fangbo",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Ahmed El-Kishky, Chi Wang, Fangbo Tao, Clare R Voss, and Jiawei Han. 2015. Clustype: Effective entity recognition and typing by relation phrase-based clustering. In Proceedings of the 21th",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "995--1004",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 995-1004. ACM.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Cotype: Joint extraction of typed entities and relations with knowledge bases",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zeqiu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Tarek",
"middle": [
"F"
],
"last": "Abdelzaher",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.08763"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2016. Cotype: Joint extraction of typed entities and relations with knowledge bases. arXiv preprint arXiv:1610.08763.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In HLT-NAACL, pages 74-84.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Relext: A tool for relation extraction from text in ontology extension",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Schutz",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2005,
"venue": "International semantic web conference",
"volume": "",
"issue": "",
"pages": "593--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Schutz and Paul Buitelaar. 2005. Relext: A tool for relation extraction from text in ontology extension. In International semantic web conference, volume 2005, pages 593-606. Springer.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Composing distributed representations of relational patterns",
"authors": [
{
"first": "Sho",
"middle": [],
"last": "Takase",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sho Takase, Naoaki Okazaki, and Kentaro Inui. 2016. Composing distributed representations of relational patterns. In Proceedings of ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Representing text for joint embedding of text and knowledge bases",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Pallavi",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "15",
"issue": "",
"pages": "1499--1509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In EMNLP, volume 15, pages 1499-1509.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning latent vector spaces for product search",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Van Gysel",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [],
"last": "Kanoulas",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th ACM International on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "165--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas. 2016a. Learning latent vector spaces for product search. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 165-174. ACM.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Unsupervised, efficient and semantic expertise retrieval",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Van Gysel",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
},
{
"first": "Marcel",
"middle": [],
"last": "Worring",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1069--1079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Van Gysel, Maarten de Rijke, and Marcel Worring. 2016b. Unsupervised, efficient and semantic expertise retrieval. In Proceedings of the 25th International Conference on World Wide Web, pages 1069-1079. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Socratic learning: Correcting misspecified generative models using discriminative models",
"authors": [
{
"first": "Paroma",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Iter",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Rose",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "De Sa",
"suffix": ""
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.08123"
]
},
"num": null,
"urls": [],
"raw_text": "Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, and Christopher R\u00e9. 2016. Socratic learning: Correcting misspecified generative models using discriminative models. arXiv preprint arXiv:1610.08123.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Multilingual relation extraction using compositional universal schema",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06396"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2015. Multilingual relation extraction using compositional universal schema. arXiv preprint arXiv:1511.06396.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In COLING, pages 2335-2344.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Modeling truth existence in truth discovery",
"authors": [
{
"first": "Shi",
"middle": [],
"last": "Zhi",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenzhu",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1543--1552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi Zhi, Bo Zhao, Wenzhu Tong, Jing Gao, Dian Yu, Heng Ji, and Jiawei Han. 2015. Modeling truth existence in truth discovery. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1543-1552. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "REHESSION Framework except Extraction and Representation of Text Features",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Relation Mention Representation",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Fig. 2); 2. Text feature embeddings are utilized to calculate relation mention embeddings (see Fig. 2);",
"num": null
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "With text feature embeddings learned by Eq. 1, a naive method to"
},
"TABREF5": {
"html": null,
"num": null,
"content": "<table><tr><td>CoreNLP tool (Manning et al., 2014) to generate</td></tr><tr><td>entity mentions and get POS tags for both datasets.</td></tr><tr><td>Brown clusters(Brown et al., 1992) are derived for</td></tr><tr><td>each corpus using public implementation 2 . All</td></tr><tr><td>these features are shared with all compared meth-</td></tr><tr><td>ods in our experiments.</td></tr></table>",
"type_str": "table",
"text": "Number of labeling functions and the relation types they can annotate w.r.t. two kinds of information"
},
"TABREF7": {
"html": null,
"num": null,
"content": "<table><tr><td>Dataset</td><td>Wiki-KBP</td><td>NYT</td></tr><tr><td>Total Number of RM</td><td>225977</td><td>530767</td></tr><tr><td>RM annotated as None</td><td>100521</td><td>356497</td></tr><tr><td>RM with conflicts</td><td>32008</td><td>58198</td></tr><tr><td>Conflicts involving None</td><td>30559</td><td>38756</td></tr></table>",
"type_str": "table",
"text": "Performance comparison of relation extraction and relation classification"
},
"TABREF8": {
"html": null,
"num": null,
"content": "<table><tr><td>BFK</td></tr></table>",
"type_str": "table",
"text": "Number of relation mentions (RM), relation mentions annotated as None, relation mentions with conflicting annotations and conflicts involving None"
},
"TABREF9": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Example output of true label discovery. The first two relation mentions come from Wiki-KBP, and their annotations are {born-in, None}. The last two are created by replacing key words of the first two. Key words are marked in bold and entity mentions in italics."
},
"TABREF10": {
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"2\">Dataset &amp; Method</td><td>Prec</td><td>Rec</td><td>F1</td><td>Acc</td></tr><tr><td>Wiki-KBP</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"5\">Ori 0.4122 0.5726 0.4792 0.8381</td></tr><tr><td>NYT</td><td colspan=\"5\">TD 0.3758 0.4887 0.4239 0.7387</td></tr><tr><td/><td colspan=\"5\">US 0.3573 0.5145 0.4223 0.7362</td></tr></table>",
"type_str": "table",
"text": "Ori 0.3677 0.4933 0.4208 0.7277 TD 0.3032 0.5279 0.3850 0.7271 US 0.3380 0.4779 0.3960 0.7268"
},
"TABREF11": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": ""
}
}
}
}