| { |
| "paper_id": "N18-1003", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:54:43.259749Z" |
| }, |
| "title": "Joint Bootstrapping Machines for High Confidence Relation Extraction", |
| "authors": [ |
| { |
| "first": "Pankaj", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Siemens AG Munich", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "pankaj.gupta@siemens.com" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Munich (LMU) Munich", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "beroth@cis.lmu.de" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Munich (LMU) Munich", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Semi-supervised bootstrapping techniques for relationship extraction from text iteratively expand a set of initial seed instances. Due to the lack of labeled data, a key challenge in bootstrapping is semantic drift: if a false positive instance is added during an iteration, then all following iterations are contaminated. We introduce BREX, a new bootstrapping method that protects against such contamination by highly effective confidence assessment. This is achieved by using entity and template seeds jointly (as opposed to just one as in previous work), by expanding entities and templates in parallel and in a mutually constraining fashion in each iteration and by introducing higher-quality similarity measures for templates. Experimental results show that BREX achieves an F1 that is 0.13 (0.87 vs. 0.74) better than the state of the art for four relationships.", |
| "pdf_parse": { |
| "paper_id": "N18-1003", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Semi-supervised bootstrapping techniques for relationship extraction from text iteratively expand a set of initial seed instances. Due to the lack of labeled data, a key challenge in bootstrapping is semantic drift: if a false positive instance is added during an iteration, then all following iterations are contaminated. We introduce BREX, a new bootstrapping method that protects against such contamination by highly effective confidence assessment. This is achieved by using entity and template seeds jointly (as opposed to just one as in previous work), by expanding entities and templates in parallel and in a mutually constraining fashion in each iteration and by introducing higher-quality similarity measures for templates. Experimental results show that BREX achieves an F1 that is 0.13 (0.87 vs. 0.74) better than the state of the art for four relationships.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Traditional semi-supervised bootstrapping relation extractors (REs) such as BREDS (Batista et al., 2015) , SnowBall (Agichtein and Gravano, 2000) and DIPRE (Brin, 1998) require an initial set of seed entity pairs for the target binary relation. They find occurrences of positive seed entity pairs in the corpus, which are converted into extraction patterns, i.e., extractors, where we define an extractor as a cluster of instances generated from the corpus. The initial seed entity pair set is iteratively expanded with the new relationship entity pairs extracted from the text by the extractors. The augmented set is then used to extract new relationships until a stopping criterion is met.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 104, |
| "text": "(Batista et al., 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 116, |
| "end": 145, |
| "text": "(Agichtein and Gravano, 2000)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 150, |
| "end": 168, |
| "text": "DIPRE (Brin, 1998)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Due to the lack of sufficient labeled data, rule-based systems dominate commercial use (Chiticariu et al., 2013). Rules are typically defined by creating patterns around the entities (entity extraction) or entity pairs (relation extraction). Recently, supervised machine learning, especially deep learning techniques (Gupta et al., 2015; Nguyen and Grishman, 2015; Vu et al., 2016a,b; Gupta et al., 2016), have shown promising results in entity and relation extraction; however, they need sufficient hand-labeled data to train models, which can be costly and time consuming for web-scale extraction. Bootstrapping machine-learned rules can make extraction easier on large corpora. Thus, open information extraction systems (Carlson et al., 2010; Fader et al., 2011; Mausam et al., 2012; Mesquita et al., 2013; Angeli et al., 2015) have recently been popular for domain-specific or domain-independent pattern learning.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 107, |
| "text": "(Chiticariu et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 313, |
| "end": 333, |
| "text": "(Gupta et al., 2015;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 334, |
| "end": 360, |
| "text": "Nguyen and Grishman, 2015;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 361, |
| "end": 380, |
| "text": "Vu et al., 2016a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 381, |
| "end": 400, |
| "text": "Gupta et al., 2016)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 721, |
| "end": 743, |
| "text": "(Carlson et al., 2010;", |
| "ref_id": null |
| }, |
| { |
| "start": 744, |
| "end": 763, |
| "text": "Fader et al., 2011;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 764, |
| "end": 784, |
| "text": "Mausam et al., 2012;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 785, |
| "end": 807, |
| "text": "Mesquita et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 808, |
| "end": 828, |
| "text": "Angeli et al., 2015)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Hearst (1992) used hand-written rules to generate more rules to extract hypernym-hyponym pairs, without distributional similarity. For entity extraction, Riloff (1996) used seed entities to generate extractors with heuristic rules and scored them by counting positive extractions. Prior work (Lin et al., 2003) investigated different extractor scoring measures; later work improved scores by introducing the expected number of negative entities. Brin (1998) developed the bootstrapping relation extraction system DIPRE that generates extractors by clustering contexts based on string matching. SnowBall (Agichtein and Gravano, 2000) is inspired by DIPRE but computes a TF-IDF representation of each context. BREDS (Batista et al., 2015) uses word embeddings (Mikolov et al., 2013) to bootstrap relationships.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 13, |
| "text": "(1992)", |
| "ref_id": null |
| }, |
| { |
| "start": 154, |
| "end": 167, |
| "text": "Riloff (1996)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 292, |
| "end": 310, |
| "text": "(Lin et al., 2003;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 431, |
| "end": 442, |
| "text": "Brin (1998)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 588, |
| "end": 617, |
| "text": "(Agichtein and Gravano, 2000)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 699, |
| "end": 721, |
| "text": "(Batista et al., 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 743, |
| "end": 765, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Related work investigated adapting extractor scoring measures in bootstrapping entity extraction with either entities or templates (Table 1) as seeds (Table 2). The state-of-the-art relation extractors bootstrap with only seed entity pairs and suffer from a surplus of unknown extractions and the lack of labeled data, leading to low-confidence extractors. This in turn leads to low confidence in the system output. [Table 1: BREE = Bootstrapping Relation Extractor with Entity pair; BRET = Bootstrapping Relation Extractor with Template; BREJ = Bootstrapping Relation Extractor in Joint learning; type = a named entity type, e.g., person; typed entity = a typed entity, e.g., (\"Obama\", person); entity pair = a pair of two typed entities; template = a triple of vectors (v_{-1}, v_0, v_1).] Prior RE systems do not focus on improving the extractors' scores. In addition, SnowBall and BREDS used a weighting scheme to incorporate the importance of contexts around entities and compute a similarity score that introduces additional parameters and does not generalize well.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 140, |
| "text": "(Table 1)", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 152, |
| "end": 159, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We propose a Joint Bootstrapping Machine (JBM), an alternative to the entity-pair-centered bootstrapping for relation extraction that can take advantage of both entity-pair and template-centered methods to jointly learn extractors consisting of instances from the occurrences of both entity pair and template seeds. It scales up the number of positive extractions for non-noisy extractors and boosts their confidence scores. We focus on improving the scores of non-noisy-low-confidence extractors, resulting in higher recall. The relation extractors bootstrapped with entity pair, template and joint seeds are named BREE, BRET and BREJ (Table 1), respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 644, |
| "end": 653, |
| "text": "(Table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Contributions. (1)", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) Prior work on embedding-based context comparison has assumed that relations have consistent syntactic expression and has mainly addressed synonymy by using embeddings (e.g., \"acquired\" - \"bought\"). In reality, there is large variation in the syntax of how relations are expressed, e.g., \"MSFT to acquire NOK for $8B\" vs. \"MSFT earnings hurt by NOK acquisition\". We introduce cross-context similarities that compare all parts of the context (e.g., \"to acquire\" and \"acquisition\") and show that these perform better (in terms of recall) than measures assuming consistent syntactic expression of relations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions. (1)", |
| "sec_num": null |
| }, |
| { |
| "text": "(3) Experimental results demonstrate a 13% gain in F1 score on average for four relationships and suggest eliminating four parameters, compared to the state-of-the-art method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions. (1)", |
| "sec_num": null |
| }, |
| { |
| "text": "The motivation and benefits of the proposed JBM for relation extraction are discussed in depth in section 2.3. The method is applicable to both entity and relation extraction tasks. However, in the context of relation extraction, we call it BREJ.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions. (1)", |
| "sec_num": null |
| }, |
| { |
| "text": "We first introduce the notation and terms (Table 1) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 42, |
| "end": 51, |
| "text": "(Table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a relationship like \"x acquires y\", the task is to extract pairs of entities from a corpus for which the relationship is true. We assume that the arguments of the relationship are typed, e.g., x and y are organizations. We run a named entity tagger in preprocessing, so that the types of all candidate entities are given. The objects the bootstrapping algorithm generally handles are therefore typed entities (an entity associated with a type).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For a particular sentence in a corpus that states that the relationship (e.g., \"acquires\") holds between x and y, a template consists of three vectors that represent the context of x and y. v_{-1} represents the context before x, v_0 the context between x and y, and v_1 the context after y. These vectors are simply sums of the embeddings of the corresponding words. A template is \"typed\", i.e., in addition to the three vectors it specifies the types of the two entities. An instance joins an entity pair and a template. The types of entity pair and template must be the same.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
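As a concrete illustration of how the three context vectors are formed, the following sketch sums word embeddings over the before, between and after spans of a tagged sentence. The embedding table `emb`, the helper name `make_template` and the fixed context window are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

def make_template(tokens, x_span, y_span, emb, window=2):
    """Build the template vectors (v_-1, v_0, v_1): sums of the
    embeddings of the words before x, between x and y, and after y."""
    before = tokens[max(0, x_span[0] - window):x_span[0]]
    between = tokens[x_span[1]:y_span[0]]
    after = tokens[y_span[1]:y_span[1] + window]
    dim = len(next(iter(emb.values())))
    def ctx(words):
        # sum of word embeddings; unknown words contribute nothing
        return sum((emb[w] for w in words if w in emb), np.zeros(dim))
    return ctx(before), ctx(between), ctx(after)

tokens = "MSFT to acquire NOK for $8B".split()
# deterministic toy embeddings, one vector per vocabulary word
emb = {w: np.full(4, float(k + 1)) for k, w in enumerate(tokens)}
v_m1, v_0, v_1 = make_template(tokens, x_span=(0, 1), y_span=(3, 4), emb=emb)
```

Here v_0 sums the embeddings of "to" and "acquire", the between context of the MSFT/NOK entity pair.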
| { |
| "text": "The first step of bootstrapping is to extract a set of instances from the input corpus. We refer to this set as \u03b3. We will use i and j to refer to instances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "xpiq is the entity pair of instance i and xpiq is the template of instance i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A required input to our algorithm are sets of positive and negative seeds for either entity pairs (G p and G n ) or templates (G p and G n ) or both. We define G to be a tuple of all four seed sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We run our bootstrapping algorithm for k it iterations where k it is a parameter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A key notion is the similarity between two instances. We will experiment with different similarity measures. The baseline is (Batista et al., 2015)'s measure, given in Figure 4, first line: the similarity of two instances is a weighted sum of the dot products of their before contexts (v_{-1}), their between contexts (v_0) and their after contexts (v_1), where the weights w_p are parameters. We give this definition for instances, but it also applies to templates since only the context vectors of an instance are used, not the entities.", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 147, |
| "text": "(Batista et al., 2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 168, |
| "end": 176, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
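The baseline measure can be written down directly; a minimal sketch follows (the weight values are illustrative defaults, not the paper's tuned parameters):

```python
import numpy as np

def sim_match(ti, tj, weights=(0.2, 0.6, 0.2)):
    """Weighted sum of dot products of the before, between and after
    context vectors of two instances (first line of Figure 4)."""
    return sum(w * float(np.dot(vi, vj))
               for w, vi, vj in zip(weights, ti, tj))

a = (np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.]))
b = (np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.]))
score = sim_match(a, b)  # 0.2*1 + 0.6*1 + 0.2*2
```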
| { |
| "text": "The similarity between an instance i and a cluster \u03bb of instances is defined as the maximum similarity of i with any member of the cluster; see Figure 2 , right, Eq. 5. Again, there is a straightforward extension to a cluster of templates: see Figure 2 , right, Eq. 6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 152, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 244, |
| "end": 252, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
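Instance-to-cluster similarity (Eq. 5) is just a max over cluster members; a sketch with a toy pairwise similarity in place of the paper's embedding-based measure:

```python
def sim_to_cluster(i, cluster, sim):
    """Eq. 5: the similarity of instance i to a cluster is its maximum
    similarity to any member of the cluster."""
    return max(sim(i, j) for j in cluster)

# toy 1-d "instances"; negative absolute distance as similarity
sim = lambda a, b: -abs(a - b)
best = sim_to_cluster(3, [0, 2, 7], sim)
```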
| { |
| "text": "The extractors \u039b can be categorized as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Λ_NNHC = {λ ∈ Λ | λ ↦ R (non-noisy), cnf(λ, G) ≥ τ_cnf}   (1)\nΛ_NNLC = {λ ∈ Λ | λ ↦ R, cnf(λ, G) < τ_cnf}   (2)\nΛ_NHC = {λ ∈ Λ | λ ↛ R (noisy), cnf(λ, G) ≥ τ_cnf}   (3)\nΛ_NLC = {λ ∈ Λ | λ ↛ R, cnf(λ, G) < τ_cnf}   (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where R is the relation to be bootstrapped. λ_cat denotes a member of Λ_cat. For instance, a λ_NNLC is called a non-noisy-low-confidence extractor if it represents the target relation (i.e., λ ↦ R) but with a confidence below a certain threshold (τ_cnf). Extractors of types Λ_NNHC and Λ_NLC are desirable, those of types Λ_NHC and Λ_NNLC undesirable within bootstrapping.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method 2.1 Notation and definitions", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To describe BREX (Figure 1) in its most general form, we use the term item to refer to an entity pair, a template or both. The input to BREX (Figure 2, left, line 01) is a set γ of instances extracted from a corpus and G_seed, a structure consisting of one set of positive and one set of negative seed items. G_yield (line 02) collects the items that BREX extracts over several iterations. In each of k_it iterations (line 03), BREX first initializes the cache G_cache (line 04); this cache collects the items that are extracted in this iteration. The design of the algorithm balances elements that ensure high recall with elements that ensure high precision.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 27, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 143, |
| "end": 152, |
| "text": "(Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Bootstrapping Machines: BREX", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "High recall is achieved by starting with the seeds and making three \"hops\" that consecutively consider order-1, order-2 and order-3 neighbors ... [Figure 1: Joint Bootstrapping Machine. The red and blue filled circles/rings are the instances generated due to seed entity pairs and templates, respectively. Each dashed rectangular box represents a cluster of instances. Numbers indicate the flow. Follow the notations from Table 1 and Figure 2.]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 148, |
| "end": 156, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 424, |
| "end": 431, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 436, |
| "end": 444, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Bootstrapping Machines: BREX", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "of the seeds. On line 05, we make the first hop: all instances that are similar to a seed are collected, where \"similarity\" is defined differently for different BREX configurations (see below). The collected instances are then clustered, similar to work on bootstrapping by Agichtein and Gravano (2000) and Batista et al. (2015). On line 06, we make the second hop: all instances that are within τ_sim of a hop-1 instance are added; each such instance is only added to one cluster, the closest one; see the definition of μ: Figure 2, Eq. 8. On line 07, we make the third hop: we include all instances that are within τ_sim of a hop-2 instance; see the definition of ψ: Figure 2, Eq. 7. In summary, every instance that can be reached by three hops from a seed is being considered at this point. A cluster of hop-2 instances is called an extractor.", |
| "cite_spans": [ |
| { |
| "start": 273, |
| "end": 301, |
| "text": "Agichtein and Gravano (2000)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 306, |
| "end": 327, |
| "text": "Batista et al. (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 520, |
| "end": 528, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 662, |
| "end": 670, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "High precision is achieved by imposing, on line 08, a stringent check on each instance before its information is added to the cache. The core function of this check is given in Figure 2 , Eq. 9. This definition is a soft version of the following hard max, which is easier to explain:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 177, |
| "end": 185, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "cnf(i, Λ, G) = max_{λ ∈ Λ | i ∈ ψ(λ)} cnf(i, λ, G)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "We are looking for a cluster λ in Λ that licenses the extraction of i with high confidence. cnf(i, λ, G) (Figure 2, Eq. 10), the confidence of a single cluster (i.e., extractor) λ for an instance, is defined as the product of the overall reliability of λ (which is independent of i) and the similarity of i to λ, the second factor in Eq. 10, i.e., sim(i, λ). This factor sim(i, λ) prevents an extraction by a cluster whose members are all distant from the instance, even if the cluster itself is highly reliable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 105, |
| "end": 114, |
| "text": "(Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "The first factor in Eq. 10, i.e., cnf(λ, G), assesses the reliability of a cluster λ: we compute the ratio N(λ, G_n)/N(λ, G_p), i.e., the ratio between the number of instances in λ that match a negative and a positive gold seed, respectively; see Figure 3, line (i). If this ratio is close to zero, then likely false positive extractions are few compared to likely true positive extractions. For the simple version of the algorithm (for which we set w_n = 1, w_u = 0), this results in cnf(λ, G) being close to 1 and the reliability measure is not discounted. On the other hand, if N(λ, G_n)/N(λ, G_p) is larger, meaning that the relative number of likely false positive extractions is high, then cnf(λ, G) shrinks towards 0, resulting in progressive discounting of cnf(λ, G) and leading to a non-noisy-low-confidence extractor, particularly for a reliable λ. Due to the lack of labeled data, the scoring mechanism cannot distinguish between noisy and non-noisy extractors. Therefore, an extractor is judged by its ability to extract more positive and fewer negative extractions. Note that we carefully designed this precision component to give good assessments while at the same time making maximum use of the available seeds. The reliability statistics are computed on λ, i.e., on hop-2 instances (not on hop-3 instances). The ratio N(λ, G_n)/N(λ, G_p) is computed on instances that directly match a gold seed; this is the most reliable information we have available.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 327, |
| "end": 335, |
| "text": "Figure 3", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
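Eq. 11 with the simple setting w_n = 1, w_u = 0 can be sketched as follows; the counts are hypothetical, chosen to show how scaling up positives (as BREJ does) raises the confidence:

```python
def extractor_confidence(n_pos, n_neg, n_unk, w_n=1.0, w_u=0.0):
    """Eq. 11: cnf(lam, G) = 1 / (1 + w_n*N-/N+ + w_u*N0/N+), where
    N+/N- count instances in lam matching positive/negative seeds
    and N0 counts instances matching neither."""
    return 1.0 / (1.0 + w_n * n_neg / n_pos + w_u * n_unk / n_pos)

low = extractor_confidence(n_pos=2, n_neg=1, n_unk=10)   # 2/3
high = extractor_confidence(n_pos=6, n_neg=1, n_unk=10)  # 6/7
```

With w_u = 0, the unknown count is ignored; only the negative-to-positive ratio discounts the score.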
| { |
| "text": "Algorithm: BREX\n01 INPUT: γ, G_seed\n02 G_yield := G_seed\n03 for k_it iterations:\n04   G_cache := ∅\n05   Θ := {i ∈ γ | match(i, G_yield)}\n06   Λ := {μ(θ, Θ) | θ ∈ Θ}\n07   for each i ∈ ∪_{λ ∈ Λ} ψ(λ):\n08     if check(i, Λ, G_yield):\n09       add(i, G_cache)\n10   G_yield := G_yield ∪ G_cache\n11 OUTPUT: G_yield, Λ\n\nsim(i, λ) = max_{i' ∈ λ} sim(i, i')   (5)\nsim(i, G) = max_{t ∈ G} sim(i, t)   (6)\nψ(λ) = {i ∈ γ | sim(i, λ) ≥ τ_sim}   (7)\nμ(θ, Θ) = {i ∈ γ | sim(i, θ) = max_{θ' ∈ Θ} sim(i, θ') ≥ τ_sim}   (8)\ncnf(i, Λ, G) = 1 − ∏_{λ ∈ Λ | i ∈ ψ(λ)} (1 − cnf(i, λ, G))   (9)\ncnf(i, λ, G) = cnf(λ, G) · sim(i, λ)   (10)\ncnf(λ, G) = 1 / (1 + w_n · N(λ, G_n)/N(λ, G_p) + w_u · N_0(λ, G)/N(λ, G_p))   (11)\nN_0(λ, G) = |{i ∈ λ | x(i) ∉ (G_p ∪ G_n)}|   (12)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
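The loop structure above can be sketched in a few lines; match, cluster, psi and check are abstract callables standing in for the configuration-specific definitions of Figure 3, and the toy versions below are only for illustration:

```python
def brex(instances, seeds, match, cluster, psi, check, k_it):
    """Skeleton of the BREX loop (lines 01-11): hop-1 matching,
    hop-2 clustering, hop-3 expansion, then a confidence check
    before an item enters the cache."""
    g_yield = set(seeds)
    extractors = []
    for _ in range(k_it):
        cache = set()
        theta = [i for i in instances if match(i, g_yield)]  # hop 1
        extractors = cluster(theta)                          # hop 2
        for lam in extractors:
            for i in psi(lam, instances):                    # hop 3
                if check(i, extractors, g_yield):
                    cache.add(i)
        g_yield |= cache   # line 10: merge the cache into the yield
    return g_yield, extractors

# toy run: integers as instances, adjacency as similarity
match = lambda i, g: i in g
cluster = lambda theta: [theta] if theta else []
psi = lambda lam, inst: [i for i in inst if any(abs(i - j) <= 1 for j in lam)]
check = lambda i, ext, g: True
out, ext = brex([1, 2, 3, 8], {1}, match, cluster, psi, check, k_it=2)
```

Each iteration widens the yield by one adjacency hop, so 2 and then 3 are reached from seed 1, while the isolated instance 8 never is.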
| { |
| "text": "[Figure 3: per-configuration definitions (columns: BREE | BRET | BREJ).\n(i) N(λ, G_l): |{i ∈ λ | x(i) ∈ G_l^x}| ; |{i ∈ λ | sim(i, G_l^t) ≥ τ_sim}| ; |{i ∈ λ | x(i) ∈ G_l^x}| + |{i ∈ λ | sim(i, G_l^t) ≥ τ_sim}|\n(ii) (w_n, w_u): (1.0, 0.0) ; (1.0, 0.0) ; (1.0, 0.0)\n05 match(i, G): x(i) ∈ G_p^x ; sim(i, G_p^t) ≥ τ_sim ; x(i) ∈ G_p^x ∨ sim(i, G_p^t) ≥ τ_sim\n08 check(i, Λ, G): cnf(i, Λ, G) ≥ τ_cnf ; cnf(i, Λ, G) ≥ τ_cnf ; cnf(i, Λ, G) ≥ τ_cnf ∨ sim(i, G_p^t) ≥ τ_sim\n09 add(i, G): G_p^x := G_p^x ∪ {x(i)} ; G_p^t := G_p^t ∪ {t(i)} ; G_p^x := G_p^x ∪ {x(i)}, G_p^t := G_p^t ∪ {t(i)}]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "After all instances have been checked (line 08) and (if they passed muster) added to the cache (line 09), the inner loop ends and the cache is merged into the yield (line 10). Then a new loop (lines 03-10) of hop-1, hop-2 and hop-3 extensions and cluster reliability tests starts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, the algorithm consists of k_it iterations. There is a tradeoff here between τ_sim and k_it. We will give two extreme examples, assuming that we want to extract a fixed number of m instances where m is given. We can achieve this goal either by setting k_it = 1 and choosing a small τ_sim, which will result in very large hops. Or we can achieve this goal by setting τ_sim to a large value and running the algorithm for a larger number of iterations k_it. The flexibility that the two hyperparameters k_it and τ_sim afford is important for good performance. [Figure 4: Similarity measures. These definitions for instances equally apply to templates since the definitions only depend on the \"template part\" of an instance, i.e., its vectors. (The value is 0 if the types are different.)]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 553, |
| "end": 561, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "sim_match(i, j) = Σ_{p ∈ {−1,0,1}} w_p · v_p(i) · v_p(j)\nsim_cc^asym(i, j) = max_{p ∈ {−1,0,1}} v_p(i) · v_0(j)   (13)\nsim_cc^sym1(i, j) = max(max_{p ∈ {−1,0,1}} v_p(i) · v_0(j), max_{p ∈ {−1,0,1}} v_p(j) · v_0(i))   (14)\nsim_cc^sym2(i, j) = max((v_{−1}(i) + v_1(i)) · v_0(j), (v_{−1}(j) + v_1(j)) · v_0(i), v_0(i) · v_0(j))   (15)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
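The three cross-context measures can be sketched directly from Eqs. 13-15; instances are represented as (v_-1, v_0, v_1) tuples and the function names are ours:

```python
import numpy as np

def sim_cc_asym(ti, tj):
    """Eq. 13: max over all context vectors of i dotted with the
    between context of j."""
    return max(float(np.dot(v, tj[1])) for v in ti)

def sim_cc_sym1(ti, tj):
    """Eq. 14: symmetric version of Eq. 13."""
    return max(sim_cc_asym(ti, tj), sim_cc_asym(tj, ti))

def sim_cc_sym2(ti, tj):
    """Eq. 15: summed outer contexts of one instance against the
    between context of the other, plus between-between."""
    return max(float(np.dot(ti[0] + ti[2], tj[1])),
               float(np.dot(tj[0] + tj[2], ti[1])),
               float(np.dot(ti[1], tj[1])))

# the relational cue sits in the between slot of i but in an outer
# slot of j; max-pooling across slots can still match them (toy vectors)
ti = (np.array([0., 1.]), np.array([1., 0.]), np.array([0., 0.]))
tj = (np.array([1., 0.]), np.array([0., 1.]), np.array([0., 0.]))
```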
| { |
| "text": "[Figure 5: Example of computing the positive-instance counts N+(λ, G_p) for an extractor λ under BREE, BRET and BREJ.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed", |
| "sec_num": null |
| }, |
| { |
| "text": "The main contribution of this paper is that we propose, as an alternative to entity-pair-centered BREE (Batista et al., 2015), template-centered BRET as well as BREJ (Figure 1), an instantiation of BREX that can take advantage of both entity pairs and templates. The differences and advantages of BREJ over BREE and BRET are:", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 125, |
| "text": "(Batista et al., 2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 177, |
| "text": "(Figure 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BREE, BRET and BREJ", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(1) Disjunctive Matching of Instances: The first difference is realized in how the three algorithms match instances with seeds (line 05 in Figure 3). BREE checks whether the entity pair of an instance is one of the entity pair seeds, BRET checks whether the template of an instance is one of the template seeds, and BREJ checks whether the disjunction of the two is true. The disjunction facilitates a higher hit rate in matching instances with seeds. The introduction of a few handcrafted templates along with seed entity pairs allows BREJ to leverage discriminative patterns and learn similar ones via distributional semantics. In Figure 1, the joint approach results in hybrid extractors Λ that contain instances due to seed occurrences Θ of both entity pairs and templates.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 148, |
| "text": "Figure 3)", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 633, |
| "end": 641, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BREE, BRET and BREJ", |
| "sec_num": "2.3" |
| }, |
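BREJ's disjunctive match (line 05 in Figure 3) can be sketched as follows; the seed container and the toy exact-match template similarity are hypothetical simplifications:

```python
def match_brej(instance, seeds, sim, tau_sim):
    """An instance fires if its entity pair is a seed pair OR its
    template is within tau_sim of a seed template."""
    pair, template = instance
    if pair in seeds["pairs"]:
        return True
    return any(sim(template, t) >= tau_sim for t in seeds["templates"])

seeds = {"pairs": {("MSFT", "NOK")}, "templates": ["to acquire"]}
sim = lambda a, b: 1.0 if a == b else 0.0  # toy template similarity
hit_by_template = match_brej((("GOOG", "YT"), "to acquire"), seeds, sim, 0.5)
hit_by_pair = match_brej((("MSFT", "NOK"), "earnings of"), seeds, sim, 0.5)
miss = match_brej((("GOOG", "YT"), "earnings of"), seeds, sim, 0.5)
```

Either branch of the disjunction suffices, which is what raises the hit rate over BREE (pair only) or BRET (template only).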
| { |
| "text": "(2) Hybrid Augmentation of Seeds: On line 09 in Figure 3, we see that the bootstrapping step is defined in a straightforward fashion: the entity pair of an instance is added for BREE, the template for BRET and both for BREJ. Figure 1 illustrates the hybrid augmentation of seeds via red and blue rings of output instances.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 48, |
| "end": 56, |
| "text": "Figure 3", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "BREE, BRET and BREJ", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(3) Scaling Up Positives in Extractors: As discussed in section 2.2, a good measure of the quality of an extractor is crucial, and N, the number of instances in an extractor λ that match a seed, is an important component of that. For BREE and BRET, the definition follows directly from the fact that these are entity-pair and template-centered instantiations of BREX, respectively. However, the disjunctive matching of instances for an extractor with entity pair and template seeds in BREJ (Figure 3, line (i)) boosts the likelihood of finding positive instances. In Figure 5, we demonstrate computing the count of positive instances. The confidence cnf(λ, G) is based on the ratio N(λ, G_n)/N(λ, G_p); therefore, scaling up the positive count boosts an extractor's confidence.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 490, |
| "end": 499, |
| "text": "(Figure 3", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 569, |
| "end": 577, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "BREE, BRET and BREJ", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In Figure 6, we demonstrate with an example how the joint bootstrapping scales up the positive instances for a non-noisy extractor λ, resulting in λ_NNHC for BREJ compared to λ_NNLC in BREE.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "BREE, BRET and BREJ", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Due to unlabeled data, the instances not matching any seed are either ignored/unknown (N_0) or treated as negatives in the confidence measure (Eq. 11). The former leads to high confidences for noisy extractors by assigning them high scores, the latter to low confidences for non-noisy extractors by penalizing them. For a simple version of the algorithm in the illustration, we treat them as negatives and set w_n = 1. Figure 6 shows the three extractors (\u03bb) generated and their confidence scores in BREE, BRET and BREJ. Observe that the scaling up of positives in BREJ due to BRET extractions (without w_n) discounts cnf(\u03bb, G) less than BREE does. The discounting results in \u03bb_{NNHC} in BREJ and \u03bb_{NNLC} in BREE. The discounting in BREJ favors non-noisy extractors, facilitated by BRET generating mostly non-noisy extractors due to stringent checks (Figure 3 , line \"(i)\" and 05). Intuitively, this intermixing of non-noisy extractors (i.e., the hybrid) promotes the scaling and boosts recall.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 419, |
| "end": 427, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 869, |
| "end": 878, |
| "text": "(Figure 3", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "BREE, BRET and BREJ", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The before (v_{-1}) and after (v_{+1}) contexts around the entities are highly sparse due to the large variation in the syntax of how relations are expressed. SnowBall, DIPRE and BREE assumed that the between (v_0) context mostly defines the syntactic expression for a relation and used a weighting mechanism over the three contextual similarities (Figure 4 ). They assigned a higher weight to the similarity of the between (v_0) contexts, which resulted in lower recall. We introduce an attentive (max) similarity across all contexts (for example, matching v_{-1}(i) with v_0(j)) to automatically capture the large variation in the syntax of how relations are expressed, without using any weights. We investigate asymmetric (Eq. 13) and symmetric (Eqs. 14 and 15) similarity measures, and name them cross-context attentive (sim_cc) similarity.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 343, |
| "end": 352, |
| "text": "(Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Similarity Measures", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We re-run BREE (Batista et al., 2015) as the baseline with a set of 5.5 million news articles from AFP and APW (Parker et al., 2011) . We use the processed dataset of 1.2 million sentences (released by BREE) containing at least two entities linked to FreebaseEasy (Bast et al., 2014) . We extract four relationships: acquired (ORG-ORG), founder-of (ORG-PER), headquartered (ORG-LOC) and affiliation (ORG-PER) for Organization (ORG), Person (PER) and Location (LOC) entity types. We bootstrap relations in BREE, BRET and BREJ, each with 4 similarity measures, using seed entity pairs and templates (Table 2) . [Table 5 caption: Precision (P), Recall (R) and F_1 compared to the state-of-the-art (baseline). #out: count of output instances with cnf(i, \u039b, G) \u2265 0.5. avg: average. Bold and underline: maximum due to BREJ and sim_cc, respectively.] See Tables 3, 4 and 5 for the count of candidates, the hyperparameters and the different configurations, respectively. Our evaluation is based on Bronzi et al. (2012)'s framework to estimate precision and recall of large-scale RE systems using FreebaseEasy (Bast et al., 2014) . Also following Bronzi et al. (2012), we use Pointwise Mutual Information (PMI) (Turney, 2001) to evaluate our system automatically, in addition to relying on an external knowledge base. We consider only extracted relationship instances with confidence scores cnf(i, \u039b, G) equal to or above 0.5. We follow the same approach as BREE (Batista et al., 2015) to detect the correct order of entities in a relational triple, where we try to identify the presence of passive voice using part-of-speech (POS) tags, considering any form of the verb to be, followed by a verb in the past tense or past participle, and ending in the word 'by'. We use GloVe (Pennington et al., 2014) embeddings. Table 5 shows the experimental results of the three systems for the different relationships with ordered entity pairs and similarity measures (sim_match, sim_cc). Observe that BRET (config_5) is precision-oriented while BREJ (config_9) is recall-oriented when compared to BREE (baseline). The number of output instances #out is also higher in BREJ, hence the higher recall. [Table 7 caption: Comparative analysis using different thresholds \u03c4 to evaluate the extracted instances for acquired.] The BREJ system in the different similarity configurations outperforms the baseline BREE and BRET in terms of F_1 score. On average over the four relations, BREJ in configurations config_9 and config_10 results in an F_1 that is 0.11 (0.85 vs 0.74) and 0.13 (0.87 vs 0.74) better than the baseline BREE. We find that sim_cc improves #out and recall over sim_match correspondingly in all three systems. Observe that sim_cc performs better with BRET than with BREE due to the non-noisy extractors in BRET. The results suggest an alternative to the weighting scheme in sim_match, and therefore state-of-the-art (sim_cc) performance with the 3 parameters (w_{-1}, w_0 and w_{+1}) removed from bootstrapping. Observe that the asymmetric sim_cc gives higher recall than the two symmetric similarity measures. Table 6 shows the performance of BREJ in different iterations trained with different similarity \u03c4_sim and confidence \u03c4_cnf thresholds. Table 7 shows a comparative analysis of the three systems, where we consider and evaluate the extracted relationship instances at different confidence scores.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 37, |
| "text": "(Batista et al., 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 108, |
| "end": 129, |
| "text": "(Parker et al., 2011)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 257, |
| "end": 276, |
| "text": "(Bast et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1075, |
| "end": 1094, |
| "text": "(Bast et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1425, |
| "end": 1447, |
| "text": "(Batista et al., 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1741, |
| "end": 1766, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 568, |
| "end": 575, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 816, |
| "end": 825, |
| "text": "(Table 2)", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 832, |
| "end": 843, |
| "text": "Tables 3, 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 1779, |
| "end": 1786, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 2220, |
| "end": 2227, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 3065, |
| "end": 3072, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 3200, |
| "end": 3207, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset and Experimental Setup", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "As discussed in section 2.3, BREJ facilitates disjunctive matching of instances (line 05 in Figure 3 ) with seed entity pairs and templates. Table 8 shows #hit in the three systems, where the higher values of #hit in BREJ conform to the desired property. Observe that some instances in BREJ are matched by both seed types.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 97, |
| "text": "Figure 3", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 138, |
| "end": 145, |
| "text": "Table 8", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Disjunctive Seed Matching of Instances", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We analyze the extractors \u039b generated in BREE, BRET and BREJ for the 4 relations to demonstrate the impact of joint bootstrapping. Table 9 shows the attributes of \u039b. We manually annotate the extractors as noisy and non-noisy. We compute A_{NNLC}, and the lower values in BREJ compared to BREE suggest fewer non-noisy extractors with low confidence in BREJ, due to the scaled positives in BREJ that shrink N(\u03bb, G_n)/N(\u03bb, G_p), i.e., ANP. This facilitates a \u03bb_{NNLC} to boost its confidence, i.e., to become \u03bb_{NNHC} in BREJ, as suggested by AES, which results in higher #out and recall (Table 5, BREJ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 138, |
| "text": "Table 9", |
| "ref_id": "TABREF13" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Deep Dive into Attributes of Extractors", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "As discussed, Table 5 shows the performance of BREE, BRET and BREJ with the parameter w_n = 0.5 in computing the extractors' confidence cnf(\u03bb, G) (Eq. 11). In other words, config_9 (Table 5) is a combination of both weighted negative and scaled positive extractions. However, we also investigate ignoring w_n (= 1.0) in order to demonstrate the capability of BREJ with only scaling positives and without weighting negatives. In Table 10 , observe that BREJ outperformed both BREE and BRET for all the relationships due to higher #out and recall. In addition, the BREJ scores are comparable to config_9 (Table 5) , suggesting that the scaling in BREJ is capable enough to remove the parameter w_n. However, the combination of both weighting negatives and scaling positives results in the state-of-the-art performance. Table 11 lists some of the non-noisy extractors (simplified) learned in different configurations to illustrate the boosting of extractor confidence cnf(\u03bb, G). Since an extractor \u03bb is a cluster of instances, to simplify, we show one instance (mostly populated) from every \u03bb. [Table 11 caption: Subset of the non-noisy extractors (simplified) with their confidence scores cnf(\u03bb, G) learned in different configurations for each relation. \u00a6 denotes that the extractor was never learned in config_1 and config_5. X indicates that the extractor was never learned in config_1, config_5 and config_9.]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 21, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 418, |
| "end": 426, |
| "text": "Table 10", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 588, |
| "end": 597, |
| "text": "(Table 5)", |
| "ref_id": null |
| }, |
| { |
| "start": 802, |
| "end": 810, |
| "text": "Table 11", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 1042, |
| "end": 1050, |
| "text": "Table 11", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Weighting Negatives Vs Scaling Positives", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "[X] and [Y] indicate placeholders for entities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Inspection of Extractors", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "Each cell in Table 11 represents either a simplified representation of \u03bb or its confidence. We demonstrate how the confidence score of a non-noisy extractor in BREE (config_1) increases in BREJ (config_9 and config_10). For instance, for the relation acquired, an extractor {[X] acquiring [Y]} is generated by BREE, BRET and BREJ; however, its confidence is boosted from 0.75 in BREE (config_1) to 0.95 in BREJ (config_9). Observe that BRET generates high-confidence extractors. We also show extractors (marked by X) learned by BREJ with sim_cc (config_10) but not by config_1, config_5 or config_9.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 53, |
| "end": 61, |
| "text": "Table 11", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualitative Inspection of Extractors", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "3.7 Entity Pairs: Ordered Vs Bi-Set", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Inspection of Extractors", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "In Table 5 , we use ordered pairs of typed entities. Additionally, we also investigate using entity sets and observe improved recall due to higher #out in both BREE and BREJ, comparing Table 12 and Table 5 (baseline and config_9) correspondingly.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 201, |
| "end": 209, |
| "text": "Table 12", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualitative Inspection of Extractors", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "We have proposed a Joint Bootstrapping Machine for relation extraction (BREJ) that takes advantage of both entity-pair-centered and template-centered approaches. We have demonstrated that the joint approach scales up positive instances, which boosts the confidence of non-noisy low-confidence (NNLC) extractors and improves recall. The experiments showed that the cross-context similarity measures improved recall and suggest that in total four parameters can be removed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "github.com/pgcool/Joint-Bootstrapping-Machines", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank our colleagues Bernt Andrassy, Mark Buckley, Stefan Langer, Ulli Waltinger and Usama Yaseen, and anonymous reviewers for their review comments. This research was supported by Bundeswirtschaftsministerium (bmwi.de), grant 01MD15010A (Smart Data Web) at Siemens AG-CT Machine Intelligence, Munich Germany.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Snowball: Extracting relations from large plain-text collections", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Agichtein", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Gravano", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 15th ACM conference on Digital libraries. Association for Computing Machinery", |
| "volume": "", |
| "issue": "", |
| "pages": "85--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snow- ball: Extracting relations from large plain-text col- lections. In Proceedings of the 15th ACM confer- ence on Digital libraries. Association for Comput- ing Machinery, Washington, DC USA, pages 85-94.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Leveraging linguistic structure for open domain information extraction", |
| "authors": [ |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "Melvin Jose Johnson", |
| "middle": [], |
| "last": "Premkumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "344--354", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguis- tic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing. Association for Computational Linguistics, Beijing, China, volume 1, pages 344- 354.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Easy access to the freebase dataset", |
| "authors": [ |
| { |
| "first": "Hannah", |
| "middle": [], |
| "last": "Bast", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "B\u00e4urle", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Buchhold", |
| "suffix": "" |
| }, |
| { |
| "first": "Elmar", |
| "middle": [], |
| "last": "Hau\u00dfmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 23rd International Conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "95--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hannah Bast, Florian B\u00e4urle, Bj\u00f6rn Buchhold, and El- mar Hau\u00dfmann. 2014. Easy access to the free- base dataset. In Proceedings of the 23rd Interna- tional Conference on World Wide Web. Association for Computing Machinery, Seoul, Republic of Ko- rea, pages 95-98.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Semi-supervised bootstrapping of relationship extractors with distributional semantics", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "S" |
| ], |
| "last": "Batista", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "M\u00e1rio", |
| "middle": [ |
| "J" |
| ], |
| "last": "Silva", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal", |
| "volume": "", |
| "issue": "", |
| "pages": "499--504", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David S. Batista, Bruno Martins, and M\u00e1rio J. Silva. 2015. Semi-supervised bootstrapping of relation- ship extractors with distributional semantics. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Associ- ation for Computational Linguistics, Lisbon, Portu- gal, pages 499-504.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Extracting patterns and relations from the world wide web", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sergey Brin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "International Workshop on The World Wide Web and Databases", |
| "volume": "", |
| "issue": "", |
| "pages": "172--183", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sergey Brin. 1998. Extracting patterns and rela- tions from the world wide web. In International Workshop on The World Wide Web and Databases. Springer, Valencia, Spain, pages 172-183.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automatic evaluation of relation extraction systems on large-scale", |
| "authors": [ |
| { |
| "first": "Mirko", |
| "middle": [], |
| "last": "Bronzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaochen", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Filipe", |
| "middle": [], |
| "last": "Mesquita", |
| "suffix": "" |
| }, |
| { |
| "first": "Denilson", |
| "middle": [], |
| "last": "Barbosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Merialdo", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Webscale Knowledge Extraction (AKBC-WEKEX). Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "19--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mirko Bronzi, Zhaochen Guo, Filipe Mesquita, De- nilson Barbosa, and Paolo Merialdo. 2012. Auto- matic evaluation of relation extraction systems on large-scale. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web- scale Knowledge Extraction (AKBC-WEKEX). As- sociation for Computational Linguistics, Montr\u00e8al, Canada, pages 19-24.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Toward an architecture for neverending language learning", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 24th National Conference on Artificial Intelligence (AAAI)", |
| "volume": "5", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell. 2010. Toward an architecture for never- ending language learning. In Proceedings of the 24th National Conference on Artificial Intelligence (AAAI). Atlanta, Georgia USA, volume 5, page 3.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Rule-based information extraction is dead! long live rule-based information extraction systems!", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Chiticariu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yunyao", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederick", |
| "middle": [ |
| "R" |
| ], |
| "last": "Reiss", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "827--832", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Chiticariu, Yunyao Li, and Frederick R. Reiss. 2013. Rule-based information extraction is dead! long live rule-based information extraction systems! In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics, Seattle, Wash- ington USA, pages 827-832.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Identifying relations for open information extraction", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1535--1545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing. Association for Computational Linguistics, Ed- inburgh, Scotland UK, pages 1535-1545.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Deep learning methods for the extraction of relations in natural language text", |
| "authors": [ |
| { |
| "first": "Pankaj", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Runkler", |
| "suffix": "" |
| }, |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Adel", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernt", |
| "middle": [], |
| "last": "Andrassy", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans-Georg", |
| "middle": [], |
| "last": "Zimmermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pankaj Gupta, Thomas Runkler, Heike Adel, Bernt Andrassy, Hans-Georg Zimmermann, and Hinrich Sch\u00fctze. 2015. Deep learning methods for the ex- traction of relations in natural language text. Tech- nical report, Technical University of Munich, Ger- many.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Table filling multi-task recurrent neural network for joint entity and relation extraction", |
| "authors": [ |
| { |
| "first": "Pankaj", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernt", |
| "middle": [], |
| "last": "Andrassy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. Osaka", |
| "volume": "", |
| "issue": "", |
| "pages": "2537--2547", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pankaj Gupta, Hinrich Sch\u00fctze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural net- work for joint entity and relation extraction. In Pro- ceedings of the 26th International Conference on Computational Linguistics: Technical Papers. Os- aka, Japan, pages 2537-2547.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Induced lexicosyntactic patterns improve information extraction from online medical forums", |
| "authors": [ |
| { |
| "first": "Sonal", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Diana", |
| "middle": [ |
| "L" |
| ], |
| "last": "Maclean", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Heer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of the American Medical Informatics Association", |
| "volume": "21", |
| "issue": "5", |
| "pages": "902--909", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sonal Gupta, Diana L. MacLean, Jeffrey Heer, and Christopher D. Manning. 2014. Induced lexico- syntactic patterns improve information extraction from online medical forums. Journal of the Amer- ican Medical Informatics Association 21(5):902- 909.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Improved pattern learning for bootstrapped entity extraction", |
| "authors": [ |
| { |
| "first": "Sonal", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 18th Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "98--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sonal Gupta and Christopher Manning. 2014. Im- proved pattern learning for bootstrapped entity ex- traction. In Proceedings of the 18th Confer- ence on Computational Natural Language Learning (CoNLL). Association for Computational Linguis- tics, Baltimore, Maryland USA, pages 98-108.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Automatic acquisition of hyponyms from large text corpora", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Marti", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 15th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "539--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 15th International Conference on Computational Linguistics. Nantes, France, pages 539-545.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Bootstrapped learning of semantic classes from positive and negative examples", |
| "authors": [ |
| { |
| "first": "Winston", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Roman", |
| "middle": [], |
| "last": "Yangarber", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of ICML 2003 Workshop on The Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Winston Lin, Roman Yangarber, and Ralph Grish- man. 2003. Bootstrapped learning of semantic classes from positive and negative examples. In Pro- ceedings of ICML 2003 Workshop on The Contin- uum from Labeled to Unlabeled Data in Machine Learning and Data Mining. Washington, DC USA, page 21.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Open language learning for information extraction", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mausam", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Schmitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Bart", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "523--534", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Meth- ods in Natural Language Processing and Compu- tational Natural Language Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 523-534.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Effectiveness and efficiency of open relation extraction", |
| "authors": [ |
| { |
| "first": "Filipe", |
| "middle": [], |
| "last": "Mesquita", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordan", |
| "middle": [], |
| "last": "Schmidek", |
| "suffix": "" |
| }, |
| { |
| "first": "Denilson", |
| "middle": [], |
| "last": "Barbosa", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "447--457", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Filipe Mesquita, Jordan Schmidek, and Denilson Bar- bosa. 2013. Effectiveness and efficiency of open re- lation extraction. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Seattle, Washington USA, pages 447-457.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Workshop at the International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Proceedings of the Work- shop at the International Conference on Learning Representations. ICLR, Scottsdale, Arizona USA.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Relation extraction: Perspective from convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Thien", |
| "middle": [ |
| "Huu" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "39--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Rela- tion extraction: Perspective from convolutional neu- ral networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Pro- cessing. Association for Computational Linguistics, Denver, Colorado USA, pages 39-48.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "English gigaword", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Parker", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Graff", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Ke", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuaki", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Linguistic Data Consortium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword. Linguis- tic Data Consortium .", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Doha, Qatar, pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Automatically generating extraction patterns from untagged text", |
| "authors": [ |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 13th National Conference on Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "1044--1049", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ellen Riloff. 1996. Automatically generating extrac- tion patterns from untagged text. In Proceedings of the 13th National Conference on Artificial Intelli- gence (AAAI). Portland, Oregon USA, pages 1044- 1049.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Mining the web for synonyms: PMI-IR versus LSA on TOEFL", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D." |
| ], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 12th European Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Turney. 2001. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the 12th European Conference on Machine Learning.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Combining recurrent and convolutional neural networks for relation classification", |
| "authors": [ |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Adel", |
| "suffix": "" |
| }, |
| { |
| "first": "Pankaj", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "534--539", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hin- rich Sch\u00fctze. 2016a. Combining recurrent and con- volutional neural networks for relation classifica- tion. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Association for Com- putational Linguistics, San Diego, California USA, pages 534-539.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Bi-directional recurrent neural network with ranking loss for spoken language understanding", |
| "authors": [ |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pankaj", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Adel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "6060--6064", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ngoc Thang Vu, Pankaj Gupta, Heike Adel, and Hin- rich Sch\u00fctze. 2016b. Bi-directional recurrent neu- ral network with ranking loss for spoken language understanding. In Proceedings of the IEEE Inter- national Conference on Acoustics, Speech and Sig- nal Processing (ICASSP). IEEE, Shanghai, China, pages 6060-6064.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "BREX algorithm (left) and definition of key concepts (right). Seed types: BREE (entity pairs), BRET (templates), BREJ (joint: entity pairs + templates)", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Figure 3: BREX configurations", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "Illustration of Scaling-up Positive Instances. i: an instance in extractor \u03bb. Y: YES and N: NO", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "text": "I1: <Google> 's purchase of <DoubleClick> is intriguing. I2: <Google> 's acquisition of <DoubleClick> is approved. I3: <Dynegy> 's purchase of <Enron> triggered a clause. I4: <Google> 's acquisition of <YouTube> was in its final stages. An illustration of scaling positive extractions and computing confidence for a non-noisy extractor generated for the acquired relation. The dashed rectangular box represents an extractor \u03bb, where \u03bb (BREJ) is hybrid with 6 instances. Text segments matched with seed template are shown in italics. Unknowns (bold in black) are considered as negatives. G cache is a set of output instances where \u03c4 cnf \u2265 0.70.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "text": "acquired", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "text": "for positive and negative seeds, respectively for \u03bb \u2208 \u039b in the three systems. Observe the impact of scaling positive extractions (AP) in", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>\u03b3</td><td>instance set extracted from corpus</td></tr><tr><td>i</td><td>a member of \u03b3, i.e., an instance</td></tr><tr><td>x(i)</td><td>the entity pair of instance i</td></tr><tr><td>t(i)</td><td>the template of instance i</td></tr><tr><td>G p</td><td>a set of positive seed entity pairs</td></tr><tr><td>G n</td><td>a set of negative seed entity pairs</td></tr><tr><td>G p</td><td>a set of positive seed templates</td></tr><tr><td>G n</td><td>a set of negative seed templates</td></tr><tr><td>G</td><td>(G p , G n , G p , G n )</td></tr><tr><td>k it</td><td>number of iterations</td></tr><tr><td>\u03bb cat</td><td>cluster of instances (extractor)</td></tr><tr><td>cat</td><td>category of extractor \u03bb</td></tr><tr><td>\u03bb N N HC</td><td>Non-Noisy-High-Confidence extractor (True Positive)</td></tr><tr><td>\u03bb N N LC</td><td>Non-Noisy-Low-Confidence extractor (True Negative)</td></tr><tr><td>\u03bb N HC</td><td>Noisy-High-Confidence extractor (False Positive)</td></tr><tr><td>\u03bb N LC</td><td>Noisy-Low-Confidence extractor (False Negative)</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "1 ) and an entity pair instance entity pair and template (types must be the same)", |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "num": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td>N(\u03bb, G) by a factor of maximum 2 in BREJ if i is matched in both entity pair and template seeds.</td></tr><tr><td>The reliability cnf(\u03bb, G) (Eq. 11) of an extractor \u03bb</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "Seed Entity Pairs and Templates for each relation. [X] and [Y] are slots for entity type tags. N(\u03bb, G) for an extractor \u03bb within the three systems. Observe that an instance i in \u03bb can scale its", |
| "num": null |
| }, |
| "TABREF6": { |
| "content": "<table><tr><td>Parameter</td><td>Description / Search</td><td>Optimal</td></tr><tr><td>|v -1 |</td><td>maximum number of tokens in before context</td><td>2</td></tr><tr><td>|v 0 |</td><td>maximum number of tokens in between context</td><td>6</td></tr><tr><td>|v 1 |</td><td>maximum number of tokens in after context</td><td>2</td></tr><tr><td>\u03c4 sim</td><td>similarity threshold [0.6, 0.7, 0.8]</td><td>0.7</td></tr><tr><td>\u03c4 cnf</td><td>instance confidence thresholds [0.6, 0.7, 0.8]</td><td>0.7</td></tr><tr><td>w n</td><td>weights to negative extractions [0.0, 0.5, 1.0, 2.0]</td><td>0.5</td></tr><tr><td>w u</td><td>weights to unknown extractions [0.0001, 0.00001]</td><td>0.0001</td></tr><tr><td>k it</td><td>number of bootstrapping epochs</td><td>3</td></tr><tr><td>dim emb</td><td>dimension of embedding vector, V</td><td>300</td></tr><tr><td>PMI</td><td>PMI threshold in evaluation</td><td>0.5</td></tr><tr><td>Entity Pairs</td><td>Ordered Pairs (OP) or Bisets (BS)</td><td>OP</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "Count of entity-type pairs in corpus", |
| "num": null |
| }, |
| "TABREF7": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "Hyperparameters in BREE, BRET and BREJ", |
| "num": null |
| }, |
| "TABREF10": { |
| "content": "<table><tr><td/><td>\u03c4</td><td>#out</td><td>P</td><td>R</td><td>F 1</td><td>\u03c4</td><td>#out</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td rowspan=\"2\">BREE</td><td>.60</td><td>1785</td><td>.91</td><td>.39</td><td>.55</td><td>.70</td><td>1222</td><td>.94</td><td>.31</td><td>.47</td></tr><tr><td>.80</td><td>868</td><td>.95</td><td>.25</td><td>.39</td><td>.90</td><td>626</td><td>.96</td><td>.19</td><td>.32</td></tr><tr><td rowspan=\"2\">BRET</td><td>.60</td><td>2995</td><td>.89</td><td>.51</td><td>.65</td><td>.70</td><td>1859</td><td>.90</td><td>.40</td><td>.55</td></tr><tr><td>.80</td><td>1312</td><td>.91</td><td>.32</td><td>.47</td><td>.90</td><td>752</td><td>.94</td><td>.22</td><td>.35</td></tr><tr><td rowspan=\"2\">BREJ</td><td>.60</td><td>18271</td><td>.81</td><td>.85</td><td>.83</td><td>.70</td><td>14900</td><td>.84</td><td>.83</td><td>.83</td></tr><tr><td>.80</td><td>8896</td><td>.88</td><td>.75</td><td>.81</td><td>.90</td><td>5158</td><td>.93</td><td>.65</td><td>.77</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "Iterations (k it ) vs. scores with thresholds (\u03c4 ) for relation acquired in BREJ. \u03c4 refers to \u03c4 sim and \u03c4 cnf", |
| "num": null |
| }, |
| "TABREF12": { |
| "content": "<table><tr><td colspan=\"2\">Attributes</td><td colspan=\"8\">|\u039b| AIE AES AN E AN N E AN N LC AP AN AN P</td></tr><tr><td>acquired</td><td colspan=\"2\">BREE 167 BRET 17 BREJ 555</td><td>12.7 305.2 1.00 0.51 41.6 0.74</td><td>0.84 0.11 0.71</td><td>0.16 0.89 0.29</td><td>0.14 0.00 0.03</td><td>37.7 671.8 313.2</td><td>93.1 0.12 44.8</td><td>2.46 0.00 0.14</td></tr><tr><td>founder-of</td><td colspan=\"3\">BREE BRET BREJ 492 109.1 0.90 8 13.3 0.46 5 179.0 1.00</td><td>0.75 0.00 0.94</td><td>0.25 1.00 0.06</td><td>0.12 0.00 0.00</td><td>44.9 372.2 451.8</td><td colspan=\"2\">600.5 13.37 0.0 0.00 79.5 0.18</td></tr><tr><td>headquartered</td><td colspan=\"3\">BREE 655 BRET 7 BREJ 1311 45.5 18.4 365.7 1.00 0.60 0.80</td><td>0.97 0.00 0.98</td><td>0.03 1.00 0.02</td><td>0.02 0.00 0.00</td><td>46.3 848.6 324.1</td><td>82.7 0.0 77.5</td><td>1.78 0.00 0.24</td></tr><tr><td>affiliation</td><td colspan=\"3\">BREE 198 BRET 19 BREJ 470 130.2 0.72 99.7 0.55 846.9 1.00</td><td>0.25 0.00 0.21</td><td>0.75 1.00 0.79</td><td>0.34 0.00 0.06</td><td colspan=\"3\">240.5 152.2 0.63 2137.0 0.0 0.00 567.6 122.7 0.22</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "Disjunctive matching of Instances. #hit: the count of instances matched to positive seeds in k it = 1", |
| "num": null |
| }, |
| "TABREF13": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "Analyzing the attributes of extractors \u039b learned for each relationship. Attributes are: number of extractors (|\u039b|), avg number of instances in \u039b (AIE), avg \u039b score (AES), avg number of noisy \u039b (ANE), avg number of non-noisy \u039b (ANNE), avg number of \u039b N N LC below confidence 0.5 (ANNLC), avg number of positives (AP) and negatives (AN), ratio of AN to AP (ANP). The bold indicates comparison of BREE and BREJ with sim match . avg: average", |
| "num": null |
| }, |
| "TABREF15": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "BREX+sim match: Scores when w n ignored dence scores. ANNE (higher), ANNLC (lower), AP (higher) and AN (lower) collectively indicate that BRET mostly generates NNHC extractors. AP and AN indicate an average of N(\u03bb, G)", |
| "num": null |
| }, |
| "TABREF16": { |
| "content": "<table><tr><td>Relationships</td><td>BREE + sim match #out P R F 1</td><td>BREJ + sim match #out P R F 1</td></tr><tr><td>acquired</td><td>2786 .90 .50 .64</td><td>21733 .80 .87 .83</td></tr><tr><td>founder-of</td><td>543 1.0 .67 .80</td><td>31890 .80 .99 .89</td></tr><tr><td>headquartered</td><td>16832 .62 .81 .70</td><td>52286 .64 .94 .76</td></tr><tr><td>affiliation</td><td>21812 .99 .74 .85</td><td>42601 .96 .85 .90</td></tr><tr><td>avg</td><td>10493 .88 .68 .75</td><td>37127 .80 .91 .85</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "num": null |
| }, |
| "TABREF17": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "BREX+sim match: Scores with entity bisets", |
| "num": null |
| } |
| } |
| } |
| } |