| { |
| "paper_id": "Q13-1030", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:08:27.161685Z" |
| }, |
| "title": "Modeling Missing Data in Distant Supervision for Information Extraction", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "rittera@cs.cmu.edu" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": {} |
| }, |
| "email": "orene@vulcan.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Distant supervision algorithms learn information extraction models given only large readily available databases and text collections. Most previous work has used heuristics for generating labeled data, for example assuming that facts not contained in the database are not mentioned in the text, and facts in the database must be mentioned at least once. In this paper, we propose a new latent-variable approach that models missing data. This provides a natural way to incorporate side information, for instance modeling the intuition that text will often mention rare entities which are likely to be missing in the database. Despite the added complexity introduced by reasoning about missing data, we demonstrate that a carefully designed local search approach to inference is very accurate and scales to large datasets. Experiments demonstrate improved performance for binary and unary relation extraction when compared to learning with heuristic labels, including on average a 27% increase in area under the precision recall curve in the binary case.", |
| "pdf_parse": { |
| "paper_id": "Q13-1030", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Distant supervision algorithms learn information extraction models given only large readily available databases and text collections. Most previous work has used heuristics for generating labeled data, for example assuming that facts not contained in the database are not mentioned in the text, and facts in the database must be mentioned at least once. In this paper, we propose a new latent-variable approach that models missing data. This provides a natural way to incorporate side information, for instance modeling the intuition that text will often mention rare entities which are likely to be missing in the database. Despite the added complexity introduced by reasoning about missing data, we demonstrate that a carefully designed local search approach to inference is very accurate and scales to large datasets. Experiments demonstrate improved performance for binary and unary relation extraction when compared to learning with heuristic labels, including on average a 27% increase in area under the precision recall curve in the binary case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "This paper addresses the issue of missing data (Little and Rubin, 1986) in the context of distant supervision. The goal of distant supervision is to learn to process unstructured data, for instance to extract binary or unary relations from text (Bunescu and Mooney, 2007; Snyder and Barzilay, 2007; Wu and Weld, 2007; Mintz et al., 2009; Collins and Singer, 1999), using a large database of propositions as a distant source of supervision. (Figure 1: A small hypothetical database and heuristically labeled training data for the EMPLOYER relation.)",
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 71, |
| "text": "(Little and Rubin, 1986)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 245, |
| "end": 271, |
| "text": "(Bunescu and Mooney, 2007;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 272, |
| "end": 298, |
| "text": "Snyder and Barzilay, 2007;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 299, |
| "end": 317, |
| "text": "Wu and Weld, 2007;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 318, |
| "end": 337, |
| "text": "Mintz et al., 2009;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 338, |
| "end": 363, |
| "text": "Collins and Singer, 1999)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 410, |
| "end": 418, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In the case of binary relations, the intuition is that any sentence which mentions a pair of entities (e 1 and e 2 ) that participate in a relation, r, is likely to express the proposition r(e 1 , e 2 ), so we can treat it as a positive training example of r. Figure 1 presents an example of this process.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 291, |
| "end": 299, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "One question which has received little attention in previous work is how to handle the situation where information is missing, either from the text corpus, or the database. As an example, suppose the pair of entities (John P. McNamara, Washington State University) is absent from the EMPLOYER relation. In this case, the sentence in Figure 1 (and others which mention the entity pair) is effectively treated as a negative example of the relation. This is an issue of practical concern, as most databases of interest are highly incomplete; this is the reason we need to extend them by extracting information from text in the first place.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 333, |
| "end": 341, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We need to be cautious in how we handle missing data in distant supervision, because this is a case where data is not missing at random (NMAR). Whether a proposition is observed or missing in the text or database depends heavily on its truth value: given that it is true, we have some chance to observe it; however, we do not observe those which are false. To address this challenge, we propose a joint model of extraction from text and the process by which propositions are observed or missing in both the database and text. Our approach provides a natural way to incorporate side information in the form of a missing data model. For instance, popular entities such as Barack Obama already have good coverage in Freebase, so new extractions are more likely to be errors than those involving rare entities with poor coverage.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our approach to missing data is general and can be combined with various IE solutions. As a proof of concept, we extend MultiR (Hoffmann et al., 2011), a recent model for distantly supervised information extraction, to explicitly model missing data. These extensions complicate the MAP inference problem which is used as a subroutine in learning. This motivated us to explore a variety of approaches to inference in the joint extraction and missing data model. We explore both exact inference based on A* search and efficient approximate inference using local search. Our experiments demonstrate that with a carefully designed set of search operators, local search produces optimal solutions in most cases.",
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 149, |
| "text": "(Hoffmann et al., 2011", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Experimental results demonstrate large performance gains over the heuristic labeling strategy on both binary relation extraction and weakly supervised named entity categorization. For example our model obtains a 27% increase in area under the precision recall curve on the sentence-level relation extraction task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "There has been much interest in distantly supervised (also referred to as weakly supervised) training of relation extractors using databases. For example, Craven and Kumlien (1999) build a heuristically labeled dataset, using the Yeast Protein Database to label Pubmed abstracts with the subcellular-localization relation. Wu and Weld (2007) heuristically annotate Wikipedia articles with facts mentioned in the infoboxes, enabling automated infobox generation for articles which do not yet contain them. Benson et al. (2011) use a database of music events taking place in New York City as a source of distant supervision to train event extractors from Twitter. Mintz et al. (2009) used a set of relations from Freebase as a distant source of supervision to learn to extract information from Wikipedia. Riedel et al. (2010), Hoffmann et al. (2011), and Surdeanu et al. (2012) presented a series of models casting distant supervision as a multiple-instance learning problem (Dietterich et al., 1997).",
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 182, |
| "text": "Craven and Kumlien (1999)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 325, |
| "end": 343, |
| "text": "Wu and Weld (2007)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 507, |
| "end": 528, |
| "text": "Benson et. al. (2011)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 665, |
| "end": 685, |
| "text": "Mintz et. al. (2009)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 796, |
| "end": 826, |
| "text": "Wikipedia. Ridel et. al. (2010", |
| "ref_id": null |
| }, |
| { |
| "start": 827, |
| "end": 852, |
| "text": "), Hoffmann et. al. (2011", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 855, |
| "end": 882, |
| "text": "and Surdeanu et. al. (2012)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 980, |
| "end": 1005, |
| "text": "(Dietterich et al., 1997)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Recent work has begun to address the challenge of noise in heuristically labeled training data generated by distant supervision, and proposed a variety of strategies for correcting erroneous labels. Takamatsu et al. (2012) present a generative model of the labeling process, which is used as a preprocessing step for improving the quality of labels before training relation extractors. Independently, Xu et al. (2013) analyze a random sample of 1834 sentences from the New York Times, demonstrating that most entity pairs expressing a Freebase relation correspond to false negatives. They apply pseudo-relevance feedback to add missing entries in the knowledge base before applying the MultiR model (Hoffmann et al., 2011). Min et al. (2013) extend the MIML model of Surdeanu et al. (2012) using a semi-supervised approach assuming a fixed proportion of true positives for each entity pair.",
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 222, |
| "text": "Takamatsu et al. (2012)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 401, |
| "end": 418, |
| "text": "Xu et. al. (2013)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 700, |
| "end": 723, |
| "text": "(Hoffmann et al., 2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 726, |
| "end": 743, |
| "text": "Min et al. (2013)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 769, |
| "end": 792, |
| "text": "Surdeanu et. al. (2012)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "The Min et al. (2013) approach is perhaps the most closely related of the recent approaches for distant supervision. However, there are a number of key differences: (1) They impose a hard constraint on the proportion of true positive examples for each entity pair, whereas we jointly model relation extraction and missing data in the text and KB. (2) They only handle the case of missing information in the database and not in the text. (3) Their model, based on Surdeanu et al. (2012), uses hard discriminative EM to tune parameters, whereas we use perceptron-style updates. (4) We evaluate various inference strategies for exact and approximate inference.",
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 21, |
| "text": "Min et al. (2013)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The issue of missing data has been extensively studied in the statistical literature (Little and Rubin, 1986; Gelman et al., 2003) . Most methods for handling missing data assume that variables are missing at random (MAR): whether a variable is observed does not depend on its value. In situations where the MAR assumption is violated (for example distantly supervised information extraction), ignoring the missing data mechanism will introduce bias. In this case it is necessary to jointly model the process of interest (e.g. information extraction) in addition to the missing data mechanism.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 109, |
| "text": "(Little and Rubin, 1986;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 110, |
| "end": 130, |
| "text": "Gelman et al., 2003)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Another line of related work is iterative semantic bootstrapping (Brin, 1999; Agichtein and Gravano, 2000). Carlson et al. (2010) exploit constraints between relations to reduce semantic drift in the bootstrapping process; such constraints are potentially complementary to our approach of modeling missing data.",
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 77, |
| "text": "(Brin, 1999;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 78, |
| "end": 106, |
| "text": "Agichtein and Gravano, 2000)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 109, |
| "end": 131, |
| "text": "Carlson et. al. (2010)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "In this section we review the MultiR model (due to Hoffmann et al. (2011)) for distant supervision in the context of extracting binary relations. This model is extended to handle missing data in Section 4. We focus on binary relations to keep discussions concrete; unary relation extraction is also possible. Given a set of sentences, s = s 1 , s 2 , . . . , s n , which mention a specific pair of entities (e 1 and e 2 ), our goal is to correctly predict which relation is mentioned in each sentence, or \"NA\" if none of the relations under consideration are mentioned. Unlike the standard supervised learning setup, we do not observe the latent sentence-level relation mention variables, z = z 1 , z 2 , . . . , z n . Instead we only observe aggregate binary variables for each relation,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Latent Variable Model for Distantly Supervised Relation Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "d = d 1 , d 2 , . . . , d k ,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Latent Variable Model for Distantly Supervised Relation Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "which indicate whether the proposition r j (e 1 , e 2 ) is present in the database (Freebase). Of course the question which arises is: how do we relate the aggregate-level variables, d j , to the sentence-level relation mentions, z i ? A sensible answer to this question is a simple deterministic-OR function. The deterministic-OR states that if there exists at least one i such that z i = j, then d j = 1. For example, if at least one sentence mentions that \"Barack Obama was born in Honolulu\", then that fact is true in aggregate; if none of the sentences mentions the relation, then the fact is assumed false. The model also makes the converse assumption: if Freebase contains the relation BIRTHLOCATION(Barack Obama, Honolulu), then we must extract it from at least one sentence. A summary of this model, which is due to Hoffmann et al. (2011), is presented in Figure 2.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 869, |
| "end": 877, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Latent Variable Model for Distantly Supervised Relation Extraction", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To learn the parameters of the sentence-level relation mention classifier, \u03b8, we maximize the likelihood of the facts observed in Freebase conditioned on the sentences in our text corpus:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "\u03b8* = arg max_\u03b8 P (d|s; \u03b8) = arg max_\u03b8 \u220f_{e_1,e_2} \u2211_z P (z, d|s; \u03b8)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Here the conditional likelihood of a given entity pair is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "P (z, d|s; \u03b8) = \u220f_{i=1}^{n} \u03c6(z_i, s_i; \u03b8) \u00d7 \u220f_{j=1}^{k} \u03c9(z, d_j) = \u220f_{i=1}^{n} e^{\u03b8 \u2022 f(z_i, s_i)} \u00d7 \u220f_{j=1}^{k} 1_{\u00acd_j \u2295 \u2203i: j=z_i}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Where 1 x is an indicator variable which takes the value 1 if x is true and 0 otherwise, the \u03c9(z, d j ) factors are hard constraints corresponding to the deterministic-OR function, and f (z i , s i ) is a vector of features extracted from sentence s i and relation z i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "An iterative gradient-ascent based approach is used to tune \u03b8 using a latent-variable perceptron-style additive update scheme (Collins, 2002; Liang et al., 2006; Zettlemoyer and Collins, 2007). The gradient of the conditional log likelihood, for a single pair of entities, e 1 and e 2 , is given below. These expectations are too difficult to compute in practice, so instead they are approximated as maximizations. Computing this approximation to the gradient requires solving two inference problems corresponding to the two maximizations:",
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 140, |
| "text": "(Collins, 2002;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 141, |
| "end": 160, |
| "text": "Liang et al., 2006;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 161, |
| "end": 191, |
| "text": "Zettlemoyer and Collins, 2007)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "\u2202 log P (d|s; \u03b8) / \u2202\u03b8 = E_{P (z|s,d;\u03b8)}[\u2211_j f (s_j, z_j)] \u2212 E_{P (z,d|s;\u03b8)}[\u2211_j f (s_j, z_j)]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "z*_{DB} = arg max_z P (z|s, d; \u03b8);    z* = arg max_z P (z, d|s; \u03b8)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The MAP solution for the second term is easy to compute: because d and z are deterministically related, we can simply find the highest scoring relation, r, for each sentence, s i , according to the sentence-level factors, \u03c6, independently. The first term is more difficult, however, as this requires finding the best assignment to the sentence-level hidden variables z = z 1 . . . z n conditioned on the observed sentences and facts in the database. Hoffmann et al. (2011) show how this reduces to a well-known weighted edge cover problem which can be solved exactly in polynomial time.",
| "cite_spans": [ |
| { |
| "start": 451, |
| "end": 474, |
| "text": "Hoffmann et. al. (2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The model presented in Section 3 makes two assumptions which correspond to hard constraints:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "1. If a fact is not found in the database it cannot be mentioned in the text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "2. If a fact is in the database, it must be mentioned in at least one sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
"text": "These assumptions drive the learning; however, if information is missing from either the text or the database, this leads to errors in the training data (false positives and false negatives, respectively).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In order to gracefully handle the problem of missing data, we propose to extend the model presented in Section 3 by splitting the aggregate level variables, d, into two parts: t which represents whether a fact is mentioned in the text (in at least one sentence), and d which represents whether the fact is mentioned in the database. We introduce pairwise potentials \u03c8(t j , d j ) which penalize disagreement between t j and d j , that is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
"text": "\u03c8(t_j, d_j) = \u2212\u03b1_MIT if t_j = 0 and d_j = 1; \u2212\u03b1_MID if t_j = 1 and d_j = 0; 0 otherwise",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Where \u03b1 MIT (Missing In Text) and \u03b1 MID (Missing In Database) are parameters of the model which can be understood as penalties for missing information in the text and database respectively. We refer to this model as DNMAR (for Distant Supervision with Data Not Missing At Random). A graphical model representation is presented in Figure 3 . This model can be understood as relaxing the two hard constraints mentioned above into soft constraints. As we show in Section 7, simply relaxing these hard constraints into soft constraints and setting the two parameters \u03b1 MIT , and \u03b1 MID by hand on development data results in a large improvement to precision at comparable recall over MultiR on two different applications of distant supervision: binary relation extraction and named entity categorization. Inference in this model becomes more challenging however, because the constrained inference problem no longer reduces to a weighted edge cover problem as before. In Section 5, we present an inference technique for the new model which is time and memory efficient and almost always finds an exact MAP solution.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 330, |
| "end": 338, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
"text": "The learning proceeds analogously to what was described in Section 3.1, with the exception that we now maximize over the additional aggregate-level hidden variables t, which have been introduced. As before, MAP inference is a subroutine in learning, both for the unconstrained case corresponding to the second term (which is again trivial to compute), and for the constrained case, which is more challenging as it no longer reduces to a weighted edge cover problem as before.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Missing Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The only difference in the new inference problem is the addition of t; z and t are deterministically related, so we can simply find a MAP assignment to z, from which t follows. The resulting inference problem can be viewed as optimization under soft constraints, where the objective includes terms for each fact not in Freebase which is extracted from the text: \u2212\u03b1 MID , and an effective reward for extracting a fact which is contained in Freebase: \u03b1 MIT .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAP Inference", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The solution to the MAP inference problem is the value of z which maximizes the following objective:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAP Inference", |
| "sec_num": "5" |
| }, |
| { |
"text": "z*_{DB} = arg max_z P (z|s, d; \u03b8, \u03b1) = arg max_z \u2211_{i=1}^{n} \u03b8 \u2022 f (z_i, s_i) + \u2211_{j=1}^{k} [ \u03b1_MIT 1_{d_j \u2227 \u2203i: j=z_i} \u2212 \u03b1_MID 1_{\u00acd_j \u2227 \u2203i: j=z_i} ]    (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAP Inference", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Whether we choose to set the parameters \u03b1 MIT and \u03b1 MID to fixed values (Section 4), or incorporate side information through a missing data model (Section 6), inference becomes more challenging than in the model where facts observed in Freebase are treated as hard constraints (Section 3); the hard constraints are equivalent to setting \u03b1 MID = \u03b1 MIT = \u221e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAP Inference", |
| "sec_num": "5" |
| }, |
| { |
"text": "We now present exact and approximate approaches to inference. Standard search methods such as A* and branch and bound have high computation and memory requirements and are therefore only feasible on problems with few variables; they are, however, guaranteed to find an optimal solution. (Each entity pair defines an inference problem where the number of variables is equal to the number of sentences which mention the pair.) Approximate methods scale to large problem sizes, but we lose the guarantee of finding an optimal solution. After showing how to find guaranteed exact solutions for small problem sizes (e.g. up to 200 variables), we present an inference algorithm based on local search which is empirically shown to find optimal solutions in almost every case by comparing its solutions to those found by A*.",
"cite_spans": [],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAP Inference", |
| "sec_num": "5" |
| }, |
| { |
"text": "We cast exact MAP inference in the DNMAR model as an application of A* search. Each partial hypothesis, h, in the search space corresponds to a partial assignment of the first m variables in z; to expand a hypothesis, we generate k new hypotheses, where k is the total number of relations. Each new hypothesis h\u2032 contains the same partial assignment to z 1 , . . . , z m as h, with each h\u2032 having a different value of z m+1 = r.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A* Search", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "A* operates by maintaining a priority queue of hypotheses to expand, with each hypothesis' priority determined by an admissible heuristic. The heuristic represents an upper bound on the score of the best solution with h's partial variable assignment under the objective from Equation 1. In general, a tighter upper bound corresponds to a better heuristic and faster solutions. To upper bound our objective, we start with the \u03c6(z i , s i ) factors from the partial assignment. Unassigned variables (i > m) are set to their maximum possible value, max r \u03c6(r, s i ), independently. Next, to account for the effect of the aggregate \u03c8(t j , d j ) factors on the unassigned variables, we consider independently changing each unassigned z i variable for each \u03c8(t j , d j ) factor to improve the overall score. This approach can lead to inconsistencies, but provides us with a good upper bound for the best possible solution with a partial assignment to z 1 , . . . , z m .",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A* Search", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "While A* is guaranteed to find an exact solution, its time and memory requirements prohibit use on large problems involving many variables. As a more scalable alternative we propose a greedy hill climbing method (Russell et al., 1996), which starts with a full assignment to z, and repeatedly moves to the best neighboring solution z\u2032 according to the objective in Equation 1. The neighborhood of z is defined by a set of search operators. If none of the neighboring solutions has a higher score, then we have reached a (local) maximum, at which point the algorithm terminates with the current solution, which may or may not correspond to a global maximum. This process is repeated using a number of random restarts, and the best local maximum is returned as the solution.",
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 234, |
| "text": "(Russell et al., 1996)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Search", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "Search Operators: We start with a standard search operator, which considers changing each relation-mention variable, z i , individually to maximize the overall score. At each iteration, all z i s are considered, and the one which produces the largest improvement to the overall score is changed to form the neighboring solution, z\u2032. Unfortunately, this definition of the solution neighborhood is prone to poor local optima, because it is often necessary to traverse many low-scoring states before changing one of the aggregate variables, t j , and achieving a higher score from the associated aggregate factor, \u03c8(t j , d j ). For example, consider a case where the proposition r(e 1 , e 2 ) is not in Freebase, but is mentioned many times in the text, and imagine the current solution contains no mention z i = r. Any neighboring solution which assigns a mention to r will include the penalty \u03b1 MID , which could outweigh the benefit from changing any individual z i to r: \u03c6(r, s i ) \u2212 \u03c6(z i , s i ). If multiple mentions were changed to r, however, together they could outweigh the penalty for extracting a fact not in Freebase, and produce an overall higher score.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Search", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To avoid the problem of getting stuck in local optima, we propose an additional search operator which considers changing all variables, z i , which are currently assigned to a specific relation r, to a new relation r , resulting in an additional (k \u2212 1) 2 possible neighbors, in addition to the n \u00d7 (k \u2212 1) neighbors which come from the standard search operator. This aggregate-level search operator allows for more global moves which help to avoid local optima, similar to the type-level sampling approach for MCMC (Liang et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 516, |
| "end": 536, |
| "text": "(Liang et al., 2010)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Search", |
| "sec_num": "5.2" |
| }, |
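The two neighborhoods can be sketched as follows. The encoding is illustrative (relations as integers 0..k-1, an assignment z as a tuple), not the paper's implementation; note that an aggregate move such as relabeling every mention of r at once changes several variables in a single step, which is exactly what lets the search jump past the penalty barrier described above.

```python
def standard_neighbors(z, k):
    """Flip one mention variable z_i to any other relation: n*(k-1) neighbors."""
    for i, zi in enumerate(z):
        for r in range(k):
            if r != zi:
                yield z[:i] + (r,) + z[i + 1:]

def aggregate_neighbors(z, k):
    """Reassign *all* mentions currently labeled r to a new relation r':
    at most (k-1)^2 additional neighbors (a sketch of the aggregate-level
    operator; only relations actually present in z generate moves)."""
    for r in set(z):
        for r_new in range(k):
            if r_new != r:
                yield tuple(r_new if zi == r else zi for zi in z)

z, k = (0, 0, 1), 3
std = set(standard_neighbors(z, k))
agg = set(aggregate_neighbors(z, k))
print(len(std))  # n*(k-1) = 3*2 = 6
# (1, 1, 1) is reachable in one aggregate move but not in one standard move.
```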
| { |
| "text": "At each iteration, we consider all n \u00d7 (k \u2212 1) + (k\u22121) 2 possible neighboring solutions generated by both search operators, and pick the one with biggest overall improvement, or terminate the algorithm if no improvements can be made over the current solution. 20 random restarts were used for each infer-ence problem. We found this approach to almost always find an optimal solution. In over 100,000 problems with 200 or fewer variables from the New York Times dataset used in Section 7, an optimal solution was missed in only 3 cases which was verified by comparing against optimal solutions found using A*. Without including the aggregate-level search operator, local search almost always gets stuck in a local maximum.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Search", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In Section 4, we relaxed the hard constraints made by MultiR, which allows for missing information in either the text or database, enabling errors in the distantly supervised training data to be naturally corrected as a side-effect of learning. We made the simplifying assumption, however, that all facts are equally likely to be missing from the text or database, which is encoded in the choice of 2 fixed parameters \u03b1 MIT , and \u03b1 MID . Is it possible to improve performance by incorporating side information in the form of a missing data model (Little and Rubin, 1986) , taking into account how likely each fact is to be observed in the text and the database conditioned on its truth value? In our setting, the missing data model corresponds to choosing the values of \u03b1 MIT and \u03b1 MID dynamically based on the entities and relations involved.", |
| "cite_spans": [ |
| { |
| "start": 546, |
| "end": 570, |
| "text": "(Little and Rubin, 1986)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Popular Entities: Consider two entities: Barack Obama, the 44th president of the United States, and Donald Parry, a professional rugby league footballer of the 1980s. 5 Since Obama is much more wellknown than Parry, we wouldn't be very surprised to see information missing from Freebase about Parry, but it would seem odd if true propositions were missing about Obama.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We can encode these intuitions by choosing entity-specific values of \u03b1 MID :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
| { |
| "text": "\u03b1 (e1,e2) MID = \u2212\u03b3 min (c(e 1 ), c(e 2 ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
| { |
| "text": "where c(e i ) is the number of times e i appears in Freebase, which is used as an estimate of its coverage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
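A minimal sketch of this entity-popularity penalty follows. The mention counts and the value of gamma are made up for illustration; the point is that a rare entity (small count) yields a small penalty, so the model is more willing to extract facts about it that Freebase lacks.

```python
# Hypothetical Freebase mention counts c(e); the numbers are illustrative.
freebase_count = {"Barack Obama": 12000, "Donald Parry": 3}

def alpha_mid(e1, e2, gamma=1.0):
    """Entity-pair penalty for extracting a fact missing from the database:
    alpha_MID(e1, e2) = -gamma * min(c(e1), c(e2)).
    A well-covered pair gets a large penalty; a pair involving a rare
    entity gets a small one."""
    c = freebase_count.get
    return -gamma * min(c(e1, 0), c(e2, 0))

print(alpha_mid("Barack Obama", "Donald Parry"))  # -3.0
```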
| { |
| "text": "Well Aligned Relations: Given that a pair of entities, e 1 and e 2 , participating in a Freebase relation, r, appear together in a sentence s i , the chance that s i expresses r varies greatly depending on r. For example, if a sentence mentions a pair of entities which participate in both the COUNTRYCAPITOL relation and the LOCATIONCONTAINS relation (for example Moscow and Russia), it is more likely that the a random sentence will express LOCATIONCON-TAINS than COUNTRYCAPITOL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We can encode this preference for matching certain relations over others by setting \u03b1 r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
| { |
| "text": "MIT on a per-relation basis. We choose a different value of \u03b1 r MIT for each relation based on quick inspection of the data, and estimating the number of true positives. Relations such as contains, place lived, and nationality which contain a large number of true positive matches are assigned a large value of \u03b1 r MIT = \u03b3 large , those with a medium number such as capitol, place of death and administrative divisions were assigned a medium value \u03b3 medium , and those relations with few matches were assigned a small value \u03b3 small . These 3 parameters were tuned on held out development data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Side Information", |
| "sec_num": "6" |
| }, |
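The per-relation alpha_MIT can be sketched as a simple tiered lookup. The tier assignments and gamma values below are illustrative, not the tuned settings from the paper.

```python
# Three tiers of penalty for leaving a Freebase fact unmentioned in the
# text; the values stand in for the tuned gamma_large/medium/small.
GAMMA = {"large": 3.0, "medium": 1.5, "small": 0.5}

# Hypothetical tier assignments; relations with many true positive
# matches get a large penalty for being left unmentioned.
RELATION_TIER = {
    "/location/location/contains": "large",
    "/people/person/place_lived": "large",
    "/people/person/nationality": "large",
    "/location/country/capital": "medium",
    "/people/deceased_person/place_of_death": "medium",
}

def alpha_mit(relation):
    """Per-relation alpha_MIT: unknown relations default to the small tier."""
    tier = RELATION_TIER.get(relation, "small")
    return GAMMA[tier]

print(alpha_mit("/location/location/contains"))  # 3.0
print(alpha_mit("/some/rare/relation"))          # 0.5
```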
| { |
| "text": "In Section 5, we presented a scalable approach to inference in the DNMAR model which almost always finds an optimal solution. Of course the real question is: does modeling missing data improve performance at extracting information from text? In this section we present experimental results showing large improvements in both precision and recall on two distantly supervised learning tasks: binary relation extraction and named entity categorization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "7" |
| }, |
| { |
| "text": "For the sake of comparison to previous work we evaluate performance on the New York Times text, features and Freebase relations developed by Riedel et. al. (2010) which was also used by Hoffmann et. al. (2011) . This dataset is constructed by extracting named entities from 1.8 million New York Times articles, which are then match against entities in Freebase. Sentences which contain pairs of entities participating in one or more relations are then used as training examples for those relations. The sentencelevel features include word sequences appearing in context with the pair of entities, in addition to part of speech sequences, and dependency paths from the Malt parser (Nivre et al., 2004) .", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 162, |
| "text": "Riedel et. al. (2010)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 186, |
| "end": 209, |
| "text": "Hoffmann et. al. (2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 680, |
| "end": 700, |
| "text": "(Nivre et al., 2004)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Binary Relation Extraction", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "To evaluate the effect of modeling missing data in distant supervision, we compare against the Mul-tiR model for distant supervision (Hoffmann et al., 2011) , a state of the art approach for binary relation extraction which is the most similar previous work, and models facts in Freebase as hard constraints disallowing the possibility of missing information in either the text or the database. To make our experiment as controlled as possible and ruleout the possibility of differences in performance due to implementation details, we compare against our own re-implementation of MultiR which reproduces Hoffmann et. al.'s performance, and shares as much code as possible with the DNMAR model.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 156, |
| "text": "(Hoffmann et al., 2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "We evaluate binary relation extraction using two evaluations. We first evaluate on a sentence-level extraction task using a manually annotated dataset provided by Hoffmann et. al. (2011) . 6 This dataset consists of sentences paired with human judgments on whether each expresses a specific relation. Secondly, we perform an automatic evaluation which compares propositions extracted from text against held-out data from Freebase.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 186, |
| "text": "Hoffmann et. al. (2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 189, |
| "end": 190, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "7.1.2" |
| }, |
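Both evaluations are summarized by area under the precision-recall curve. A generic step-wise computation of that metric over a ranked list of extractions (a sketch of the standard metric, not the authors' evaluation code) looks like:

```python
def pr_auc(ranked_correct):
    """Area under the precision-recall curve for a ranked list of
    extractions (True = correct extraction), accumulated step-wise at
    each recall increase."""
    total_pos = sum(ranked_correct)
    if total_pos == 0:
        return 0.0
    tp = 0
    area, prev_recall = 0.0, 0.0
    for i, correct in enumerate(ranked_correct, start=1):
        if correct:
            tp += 1
            precision = tp / i
            recall = tp / total_pos
            area += precision * (recall - prev_recall)
            prev_recall = recall
    return area

print(pr_auc([True, True, False, True]))  # ~0.917 (= 11/12)
```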
| { |
| "text": "Sentential Extraction: Figure 4 presents precision and recall curves for the sentence-level relation extraction task on the same manually annotated data presented by Hoffmann et. al. (2011) . By explicitly modeling the possibility of missing information in both the text and the database we achieve a 17% increase in area under the precision recall curve. Incorporating additional side information in the form of a missing data model, as described in Section 6, produces even better performance, yielding a 27% increase over the baseline in area under the curve. We also compare against the system described by Xu et. al. (2013) the labels predicted by their Pseudo-relevance Feedback model. 7 The differences between each pair of systems, except DNMAR and Xu13 8 , is significant with p-value less than 0.05 according to a paired ttest assuming a normal distribution. Per-relation precision and recall curves are presented in Figure 6 . For certain relations, for instance /location/us state/capital, there simply isn't enough overlap between the information contained in Freebase and facts mentioned in the text to learn anything useful. For these relations, entity pair matches are unlikely to actually express the relation; for instance, in the following sentence from the data: NHPF , which has its Louisiana office in Baton Rouge , gets the funds ... although Baton Rouge is the capital of Louisiana, the /location/us state/capital relation is not expressed in this sentence. Another interesting observation which we can make from Figure 6 , is that the benefit from modeling missing data varies from one relation to another. Some relations, for instance /people/person/place of birth, have relatively good coverage in both Freebase and the text, and therefore we do not see as much gain from modeling missing data. Other relations, such as /location/location/contains, and /people/person/place lived have poorer coverage making our missing data model very beneficial.", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 189, |
| "text": "Hoffmann et. al. (2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 611, |
| "end": 628, |
| "text": "Xu et. al. (2013)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 692, |
| "end": 693, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 23, |
| "end": 31, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 927, |
| "end": 935, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 1537, |
| "end": 1545, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.1.3" |
| }, |
| { |
| "text": "Aggregate Extraction: Following previous work, we evaluate precision and recall against heldout data from Freebase in Figure 5 . As mentioned by Mintz et. al. (2009) , this automatic evaluation underestimates precision because many facts correctly extracted from the text are missing in the database and therefore judged as incorrect. Riedel et. al. (2013) further argues that this evaluation is biased because frequent entity pairs are more likely to contain facts in Freebase, so systems which rank extractions involving popular entities higher will achieve better performance independently of how accurate their predictions are. Indeed in Figure 5 we see that the precision of our system which models missing data is generally lower than the system which assumes no data is missing from Freebase, although we do roughly double the recall. By better modeling missing data we achieve lower precision on this automatic held-out evaluation as the system using hard constraints is explicitly trained to predict facts which occur in Freebase (not those which are mentioned in the text but unlikely to appear in the database).", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 165, |
| "text": "Mintz et. al. (2009)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 335, |
| "end": 356, |
| "text": "Riedel et. al. (2013)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 126, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 642, |
| "end": 650, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.1.3" |
| }, |
| { |
| "text": "As mentioned previously, the problem of missing data in distant (weak) supervision is a very general issue; so far we have investigated this problem in the context of extracting binary relations using distant supervision. We now turn to the problem of weakly supervised named entity recognition (Collins and Singer, 1999; Talukdar and Pereira, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 321, |
| "text": "(Collins and Singer, 1999;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 322, |
| "end": 349, |
| "text": "Talukdar and Pereira, 2010)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Categorization", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "To demonstrate the effect of modeling missing data in the distantly supervised named entity categorization task, we adapt the MultiR and DNMAR models to the Twitter named entity categorization dataset which was presented by Ritter et. al. (2011) . The models described so far are applied unchanged: rather than modeling a set of relations in Freebase between a pair of entities, e 1 and e 2 , we now model a set of possible Freebase categories associated with a single entity e. This is a natural extension of distant supervision from binary to unary relations. The unlabeled data and features described by Ritter et. al. (2011) are used for training the model, and their manually annotated Twitter named entity dataset is used for evaluation.", |
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 245, |
| "text": "Ritter et. al. (2011)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 607, |
| "end": 628, |
| "text": "Ritter et. al. (2011)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "7.2.1" |
| }, |
| { |
| "text": "Precision and recall at weakly supervised named entity categorization comparing MultiR against DN-MAR is presented in Figure 7 . We observe substantial improvement in precision at comparable recall by explicitly modeling the possibility of missing information in the text and database. The missing data model leads to a 107% increase in area under the precision-recall curve (from 0.16 to 0.34), but still falls short of the results presented by Ritter et. al. (2011) . Intuitively this makes sense, because the model used by Ritter et. al. is based on latent Dirichlet allocation which is better suited to this highly ambiguous unary relation data. ", |
| "cite_spans": [ |
| { |
| "start": 446, |
| "end": 467, |
| "text": "Ritter et. al. (2011)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 126, |
| "text": "Figure 7", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2.2" |
| }, |
| { |
| "text": "In this paper we have investigated the problem of missing data in distant supervision; we introduced a joint model of information extraction and missing data which relaxes the hard constraints used in previous work to generate heuristic labels, and provides a natural way to incorporate side information through a missing data model. Efficient inference breaks in the new model, so we presented an approach based on A* search which is guaranteed to find exact solutions, however exact inference is not computationally tractable for large problems. To address the challenge of large problem sizes, we proposed a scalable inference algorithm based on local search, which includes a set of aggregate search operators allowing for long-distance jumps in the solution space to avoid local maxima; this approach was experimentally demonstrated to find exact solutions in almost every case. Finally we evaluated the performance of our model on the tasks of binary relation extraction and named entity categorization showing large performance gains in each case. In future work we would like to apply our approach to modeling missing data to additional models, for instance the model of Surdeanu et. al. (2012) and Ritter et. al. (2011) , and also explore new missing data models.", |
| "cite_spans": [ |
| { |
| "start": 1179, |
| "end": 1202, |
| "text": "Surdeanu et. al. (2012)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1207, |
| "end": 1228, |
| "text": "Ritter et. al. (2011)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, 1 (2013) 367-378. Action Editor: Kristina Toutanova.Submitted 7/2013; Revised 8/2013; Published 10/2013. c 2013 Association for Computational Linguistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "These variables indicate which relation is mentioned between e1 and e2 in each sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For details seeKoller and Friedman (2009), Chapter 20.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://en.wikipedia.org/wiki/Donald_ Parry", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://raphaelhoffmann.com/mr/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We thank Wei Xu for making this data available. 8 DNMAR has a 1.3% increase in AUC over Xu13, though this difference is not significant according to a paired t-test. DNMAR* achieves a 10% increase in AUC over Xu13 which is significant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank Dan Weld, Chris Quirk, Raphael Hoffmann and the anonymous reviewers for helpful comments. Thanks to Wei Xu for providing data. This research was supported in part by ONR grant N00014-11-1-0294, DARPA contract FA8750-09-C-0179, a gift from Google, a gift from Vulcan Inc., and carried out at the University of Washington's Turing Center.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Snowball: Extracting relations from large plain-text collections", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Agichtein", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Gravano", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the fifth ACM conference on Digital libraries", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the fifth ACM conference on Digital libraries.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Event discovery in social media feeds", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Benson", |
| "suffix": "" |
| }, |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Pro- ceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Extracting patterns and relations from the world wide web", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sergey Brin", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "The World Wide Web and Databases", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Learning to extract relations from the web using minimal supervision", |
| "authors": [ |
| { |
| "first": "Razvan", |
| "middle": [], |
| "last": "Bunescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal super- vision. In ACL.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Toward an architecture for never-ending language learning", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Betteridge", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Kisiel", |
| "suffix": "" |
| }, |
| { |
| "first": "Burr", |
| "middle": [], |
| "last": "Settles", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom M", |
| "middle": [], |
| "last": "Estevam R Hruschka", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never-ending lan- guage learning. In Proceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Unsupervised models for named entity classification", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Constructing biological knowledge bases by extracting information from text sources", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Craven", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Kumlien", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "ISMB", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Craven, Johan Kumlien, et al. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Solving the multiple instance problem with axis-parallel rectangles", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dietterich", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Richard", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom\u00e1s", |
| "middle": [], |
| "last": "Lathrop", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lozano-P\u00e9rez", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas G Dietterich, Richard H Lathrop, and Tom\u00e1s Lozano-P\u00e9rez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intel- ligence.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Bayesian data analysis", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Gelman", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [ |
| "S" |
| ], |
| "last": "Carlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald B", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Gelman, John B Carlin, Hal S Stern, and Don- ald B Rubin. 2003. Bayesian data analysis. CRC press.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Knowledgebased weak supervision for information extraction of overlapping relations", |
| "authors": [ |
| { |
| "first": "Raphael", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Congle", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of ACL-HLT.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Probabilistic Graphical Models: Principles and Techniques", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Koller and N. Friedman. 2009. Probabilistic Graphi- cal Models: Principles and Techniques. MIT Press.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "An end-to-end discriminative approach to machine translation", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Bouchard-C\u00f4t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative ap- proach to machine translation. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Type-based mcmc", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Michael I Jordan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2010. Type-based mcmc. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Statistical analysis with missing data", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Roderick", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Little", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Donald B Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roderick J A Little and Donald B Rubin. 1986. Statis- tical analysis with missing data. John Wiley & Sons, Inc., New York, NY, USA.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Distant supervision for relation extraction with an incomplete knowledge base", |
| "authors": [ |
| { |
| "first": "Bonan", |
| "middle": [], |
| "last": "Min", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Gondek", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for rela- tion extraction with an incomplete knowledge base. In Proceedings of NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Distant supervision for relation extraction without labeled data", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Mintz", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bills", |
| "suffix": "" |
| }, |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction with- out labeled data. In Proceedings of ACL-IJCNLP.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Memory-based dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Modeling relations and their mentions without labeled text", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Limin", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ECML/PKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML/PKDD.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Relation extraction with matrix factorization and universal schemas", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Limin", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin M", |
| "middle": [], |
| "last": "Marlin", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Pro- ceedings of NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Named entity recognition in tweets: An experimental study", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Mausam", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experi- mental study. Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Artificial intelligence: a modern approach", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [ |
| "J" |
| ], |
| "last": "Russell", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Norvig", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "F" |
| ], |
| "last": "Candy", |
| "suffix": "" |
| }, |
| { |
| "first": "Jitendra", |
| "middle": [ |
| "M" |
| ], |
| "last": "Malik", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [ |
| "D" |
| ], |
| "last": "Edwards", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart J. Russell, Peter Norvig, John F. Candy, Jiten- dra M. Malik, and Douglas D. Edwards. 1996. Ar- tificial intelligence: a modern approach.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Databasetext alignment via structured multilabel classification", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Snyder", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Snyder and Regina Barzilay. 2007. Database- text alignment via structured multilabel classification. In Proceedings of IJCAI.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Multi-instance multilabel learning for relation extraction", |
| "authors": [ |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Julie", |
| "middle": [], |
| "last": "Tibshirani", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramesh", |
| "middle": [], |
| "last": "Nallapati", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP-Conll", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi- label learning for relation extraction. In Proceedings of EMNLP-Conll.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Reducing wrong labels in distant supervision for relation extraction", |
| "authors": [ |
| { |
| "first": "Shingo", |
| "middle": [], |
| "last": "Takamatsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Issei", |
| "middle": [], |
| "last": "Sato", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Nakagawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings ACL.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Experiments in graph-based semi-supervised learning methods for class-instance acquisition", |
| "authors": [ |
| { |
| "first": "Partha", |
| "middle": [ |
| "Pratim" |
| ], |
| "last": "Talukdar", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Partha Pratim Talukdar and Fernando Pereira. 2010. Experiments in graph-based semi-supervised learning methods for class-instance acquisition. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Autonomously semantifying wikipedia", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of CIKM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fei Wu and Daniel S. Weld. 2007. Autonomously se- mantifying wikipedia. In Proceedings of CIKM.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Filling knowledge base gaps for distant supervision of relation extraction", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Raphael", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Le", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Xu, Raphael Hoffmann Le Zhao, and Ralph Grish- man. 2013. Filling knowledge base gaps for distant supervision of relation extraction. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Online learning of relaxed ccg grammars for parsing to logical form", |
| "authors": [ |
| { |
| "first": "Luke", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In EMNLP-CoNLL.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "MultiR(Hoffmann et. al. 2011)" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "(hereinafter called Xu13). To do this, we trained our implementation of MultiR on Overall precision and Recall at the sentence-level extraction task comparing against human judgments. DNMAR * incorporates sideinformation as discussed in Section 6." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Aggregate-level automatic evaluation comparing against held-out data from Freebase. DNMAR * incorporates side-information as discussed in Section 6." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Per-relation precision and recall on the sentence-level relation extraction task. The dashed line corresponds to MultiR, DNMAR is the solid line, and DNMAR*, which incorporates side-information, is represented by the dotted line." |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Precision and Recall at the named entity categorization task" |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>Person</td><td>EMPLOYER</td></tr><tr><td colspan=\"2\">Bibb Latan\u00e9 Tim Cook Susan Wojcicki Google UNC Chapel Hill Apple</td></tr><tr><td>True Positive</td><td>\"Bibb Latan\u00e9, a professor at the University of North Carolina at Chapel Hill, published the theory in 1981.\"</td></tr><tr><td>False Positive</td><td>\"Tim</td></tr></table>", |
| "text": "Cook praised Apple's record revenue...\" False Negative \"John P. McNamara, a professor at Washington State University's Department of Animal Sciences...\"", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |