{
"paper_id": "R13-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:55:35.234510Z"
},
"title": "Towards a Structured Representation of Generic Concepts and Relations in Large Text Corpora",
"authors": [
{
"first": "Archana",
"middle": [],
"last": "Bhattarai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis",
"location": {}
},
"email": "abhattar@memphis.edu"
},
{
"first": "Vasile",
"middle": [],
"last": "Rus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis",
"location": {}
},
"email": "vrus@memphis.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Extraction of structured information from text corpora involves identifying entities and the relationships between entities expressed in unstructured text. We propose a novel iterative pattern induction method to extract relation tuples by exploiting lexical and shallow syntactic patterns of a sentence. We start with a single pattern to illustrate how the method explores additional patterns and tuples by itself with increasing amounts of data. We apply frequency- and correlation-based filtering and ranking of relation tuples to ensure the correctness of the system. Experimental evaluation against other state-of-the-art open extraction systems such as Reverb, TextRunner and WOE shows the effectiveness of the proposed system.",
"pdf_parse": {
"paper_id": "R13-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Extraction of structured information from text corpora involves identifying entities and the relationships between entities expressed in unstructured text. We propose a novel iterative pattern induction method to extract relation tuples by exploiting lexical and shallow syntactic patterns of a sentence. We start with a single pattern to illustrate how the method explores additional patterns and tuples by itself with increasing amounts of data. We apply frequency- and correlation-based filtering and ranking of relation tuples to ensure the correctness of the system. Experimental evaluation against other state-of-the-art open extraction systems such as Reverb, TextRunner and WOE shows the effectiveness of the proposed system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditional information extraction methodologies tend to extract predefined relations between named entities annotated in a separate process. While this method might be useful and accurate for smaller datasets with limited entity types and relations, it cannot scale to extracting entities and their relationships on the web due to the sheer volume and heterogeneity of data. Thus, open-domain information extraction systems such as Reverb (Fader et al., 2011), TEXTRUNNER (Yates et al., 2007) and NELL (Carlson et al., 2010) have received added attention in recent times. Extracting machine-readable structured information from free text is the basis of most semantic analytical systems. With these units of semantic information, many applications requiring semantic information processing, such as finding the semantic similarity between two units of text, semantic inference and automated question answering, can be envisioned with better performance.",
"cite_spans": [
{
"start": 430,
"end": 450,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 464,
"end": 484,
"text": "(Yates et al., 2007)",
"ref_id": "BIBREF18"
},
{
"start": 494,
"end": 516,
"text": "(Carlson et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing work on pre-defined relation extraction has implemented supervised, semi-supervised, bootstrapped and unsupervised classification methods (Zhao and Grishman, 2005), (Kambhatla, 2004), (Bunescu and Mooney, 2006), (Zelenko et al., 2003). Since open information extraction methods do not have predefined relations, it is very hard, if not impossible, to generate labeled data for all potential relations in large text corpora. In this paper, we propose an iterative pattern-induction-based extraction system, CREATE (Concept Representation and Extraction through Heterogeneous Evidence), to extract relation tuples from large text corpora. We start with a single selective pattern and iteratively add tuples and patterns to the corresponding collections. This method is easily usable in any domain since it does not require any labeled data. We ensure the selectivity of a pattern by filtering patterns with statistics such as frequency, average pointwise mutual information (PMI) and pattern specificity. CREATE works under the assumption that sentences follow patterns when expressing information and that each pattern is shared by multiple sentences. If we can discover these patterns in a language, we can extract tuples from all the sentences to build an automated system. One of the simplest cases of such a pattern is a sentence that has only two nouns and a verb in between. For example, for the sentence \"Google bought Youtube\", the part-of-speech structure is \"NNP VBD NNP\" and hence it is easy to identify the two nouns as concepts and the verb as a relation between these two concepts. Thus, the tuple bought(Google, Youtube) can be extracted with high confidence. The beauty of this system is that it gracefully identifies such patterns without requiring any human input and expands itself with the addition of every sentence to the system. The state-of-the-art system that is closest to CREATE in terms of tuple generation is Reverb (Fader et al., 2011). The core idea of Reverb is to identify a relation and extract the concepts immediately to the left and right of the relation to form a tuple. The system takes a greedy approach in that it only considers concepts adjacent to relations. Moreover, it ignores information that might change the context of the tuple in the sentence. For example, for the sentence \"RSV in older children and adults causes a cold.\", Reverb extracts the tuple causes(adults, a cold) with confidence 0.6799. This approach has two disadvantages: first, it extracts an invalid tuple because it ignores the complete sentence context; second, it misses the correct tuple causes(RSV, cold) because of its greedy nature. We overcome both disadvantages in CREATE. Although Reverb does not require training data to extract tuples, it does require labeled data to determine the confidence of a tuple. CREATE does not require labeled data other than the seed pattern at any stage of the process. With enough iterations and a larger corpus, CREATE is able to extract the tuple causes(RSV, cold) correctly with high confidence.",
"cite_spans": [
{
"start": 150,
"end": 175,
"text": "(Zhao and Grishman, 2005)",
"ref_id": "BIBREF20"
},
{
"start": 178,
"end": 195,
"text": "(Kambhatla, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 196,
"end": 222,
"text": "(Bunescu and Mooney, 2006)",
"ref_id": "BIBREF6"
},
{
"start": 223,
"end": 245,
"text": "(Zelenko et al., 2003)",
"ref_id": "BIBREF19"
},
{
"start": 1972,
"end": 1992,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A few of the properties that we exploit for the filtering of tuples are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Patterns and tuples have dual dependence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Patterns can be used to extract tuples and tuples can be used to identify patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 If a tuple is generated from two different sentences using two different patterns, then the confidence of the tuple is greatly increased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 If a pattern only produces high quality tuples, then the pattern is considered to be of high confidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The Web is highly redundant. This redundancy can be exploited to evaluate the correctness of a tuple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is to learn patterns in an iterative manner as in DIPRE (Brin, 1999) and Snowball (Agichtein and Gravano, 2000). We extend this work one step further to iteratively extract tuples with open relations from large text corpora. We follow the standard steps of extracting patterns based on known tuples, extracting tuples based on known patterns, and evaluating and refining patterns based on inherent statistics to obtain high-precision tuples and patterns. We make the following contributions in this paper.",
"cite_spans": [
{
"start": 73,
"end": 85,
"text": "(Brin, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 99,
"end": 128,
"text": "(Agichtein and Gravano, 2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We extend and adapt pattern based tuple extraction to perform open information extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a method of domain independent pattern generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 With the patterns generated in the previous step, we propose a method of relation tuple extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose an effective method to refine/rank extracted tuples and patterns without human supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the major goals of open information extraction is to build automated systems that can read textual data to a deeper extent than the bag-of-words model. Carlson et al. (Carlson et al., 2010) use a semi-supervised bootstrapping approach to continuously read and update a knowledge base with an Expectation Maximization-like algorithm. Other systems that are tied to a particular structure are (Suchanek et al., 2007), (Auer et al., 2007) and (Wu and Weld, 2010), which focus on the more structured parts of large factual collections such as Wikipedia, based on Wikipedia-centric properties.",
"cite_spans": [
{
"start": 176,
"end": 198,
"text": "(Carlson et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 400,
"end": 423,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF13"
},
{
"start": 426,
"end": 445,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 448,
"end": 467,
"text": "(Wu and Weld, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The first true open information extraction system, TEXTRUNNER, obtained training data by applying heuristic rules over dependency parses of the training corpus. Using these training samples, sequence-based classifiers were trained and more tuples were extracted. The WOE systems (Wu and Weld, 2010) introduced by Wu and Weld make use of Wikipedia as a source of training data for their extractors, which leads to further improvements over TEXTRUNNER (Yates et al., 2007). Wu and Weld also show that dependency parse features result in a dramatic increase in precision and recall over shallow linguistic features, but at the cost of extraction speed. Semi-supervised methods start with a few manually provided domain-independent extraction patterns that extract training tuples.",
"cite_spans": [
{
"start": 282,
"end": 301,
"text": "(Wu and Weld, 2010)",
"ref_id": "BIBREF15"
},
{
"start": 454,
"end": 474,
"text": "(Yates et al., 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "StatSnowball works on the principle of iterative pattern and tuple generation using a Markov Logic Network (Zhu et al., 2009) and shows improved extraction compared to TEXTRUNNER. Reverb (Fader et al., 2011) relies on the simple logic of extracting probable entities/concepts adjacent to a relation term. While it does not require seed data or training data to extract relation tuples, it depends on manually analysed data for the confidence evaluation of a tuple. Unsupervised methods generally exploit the characteristics of the text source, perform deep or shallow parsing, extract patterns and cluster these patterns to extract relations. Yan et al. (Yan et al., 2009) used the characteristics of Wikipedia and performed clustering of patterns to extract relations without human supervision. They report a precision as high as 84% with deep linguistic parsing. Other work (Syed and Finin, 2010) also uses Wikipedia for ontology development for entities. (Min et al., 2012) extract relation tuples based on an entity similarity graph and pattern similarity. Probabilistic topic models (Chang et al., 2009), (Yao et al., 2011) have also been used to infer relations between entity pairs. These models treat relation tuples as atomic observations in documents rather than word observations as in the standard LDA model.",
"cite_spans": [
{
"start": 108,
"end": 126,
"text": "(Zhu et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 187,
"end": 206,
"text": "(Fader et al., 2011",
"ref_id": "BIBREF10"
},
{
"start": 674,
"end": 692,
"text": "(Yan et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 897,
"end": 919,
"text": "(Syed and Finin, 2010)",
"ref_id": "BIBREF14"
},
{
"start": 1111,
"end": 1131,
"text": "(Chang et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 1132,
"end": 1150,
"text": "(Yao et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We formulate the problem of relation tuple extraction as a binary classification problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "Given a sentence S = (w_1, w_2, ..., e_1, ..., w_j, ..., r_1, w_k, ..., e_2, ..., w_n), where e_1 and e_2 are the entities of interest, r_1 is the relation of interest, and the words w_1, w_2, ..., w_j, ..., w_k form the context of the tuple in the sentence S, the classification function,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "f(T(S)) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "if e1 and e2 are related by r1, and \u22121 otherwise. Here, T(S) is a feature set extracted from the sentence as context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "The classification model is built based on context, independent of entities and relations. A context or pattern of a tuple in a sentence is a 4-tuple (left, middle_left, middle_right, right), where left is the sequential list of entities and words that occur before the first argument of the tuple, middle_left is the list of words between the first argument and the relation, middle_right is the list of words between the relation and the second argument, and right is the list of words that occur after the second argument in the sentence, unless another relation is detected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "The classification function f(T(S)) = 1 if the pattern of the tuple T in the sentence S exists in the pattern database, i.e., if the degree of similarity between the context of the probable tuple and one of the contexts in the context base is greater than a threshold similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "Given a set of documents containing sentences, our goal is to extract relation tuples with the highest recall and precision. As explained earlier, our system is designed to exploit the dual dependence of tuples on patterns and patterns on tuples. As a starting point, we use a seed pattern p = (\u03c6, \u03c6, \u03c6, \u03c6) that generates tuples from the text corpus. These tuples are then used to generate extraction patterns, which in turn generate more tuples, just as in Snowball. Not all tuples and patterns extracted in the process are guaranteed to be correct. A good tuple should be syntactically and semantically correct as well as articulate, autonomous and informative. Similarly, a good pattern should achieve a good balance between two competing criteria: specificity and coverage. Specificity means the pattern identifies high-quality relation tuples, while coverage means the pattern identifies a statistically non-trivial number of good relation tuples. Hence, in the process, we have a self-evaluating system which evaluates and filters out invalid tuples and patterns based on their statistical properties. The overall system can be broken down into several modules, each of which performs an isolated task such as concept extraction, relation extraction, probable tuple generation, tuple verification, etc. The system architecture is depicted in figure 1 and the algorithm is shown in Table 1. The sub-modules are explained in detail in the subsequent sub-sections. Lexical and shallow NLP techniques are robust and fast enough for a problem like ours, where extraction needs to be performed at web scale. Although our concept extraction module can easily be replaced with a named entity extractor, we primarily use part-of-speech tagging and chunking results for concept/relation extraction. All the sentences in our data sets are parsed using the OpenNLP (Baldridge et al., 2004) part-of-speech tagger.",
"cite_spans": [
{
"start": 1895,
"end": 1919,
"text": "(Baldridge et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1426,
"end": 1433,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Seed Pattern: We start with a fairly general and yet very strict pattern that extracts tuples from a sentence. The seed pattern is p_s = (\u03c6, \u03c6, \u03c6, \u03c6), meaning there is an empty left context, an empty middle left context, an empty middle right context and an empty right context. As an example, consider the sentence \"Temperature is ultimately regulated in the hypothalamus\"; our process extracts the two concepts \"Temperature\" and \"the hypothalamus\" and the relation \"is ultimately regulated in\". The left context (context before concept 1) in this case is empty, the middle left context (context between concept 1 and the relation) is also empty, and similarly the middle right and right contexts are empty. This is a fairly specific pattern for a tuple to be valid; moreover, it is domain independent and can be applied to any domain for the English language. A running example showing the steps is given in table 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Concept Extraction Module: We extract concepts in a sentence based on noun phrases. We remove leading and trailing stopwords in noun phrases. If a noun phrase contains a conjunction, we break the noun phrase down into two concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Relation Extraction Module: To extract relations, we extract the longest sequence of words such that it starts with a verb, or is a sequence of nouns, adjectives, adverbs, pronouns and determiners, or a sequence of prepositions, particles and infinitive markers. If any pair of matches is adjacent or overlaps in a sentence, we merge them into a single relation. This method has been proven effective in (Fader et al., 2011).",
"cite_spans": [
{
"start": 395,
"end": 415,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Probable Tuple Extraction: For each relation r \u2208 R and for every combination of c_i and c_j \u2208 C such that c_i occurs before r with no other relation between c_i and r, and c_j occurs after r with no other relation between c_j and r in the sentence, we create a probable tuple t = (c_i, r, c_j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Tuple Pattern Extraction: For each tuple t = (c_i, r, c_j) in sentence s, we extract the sequence of words that occurs between the beginning of the sentence and concept c_i. If a relation occurs before c_i, we start at the end of the closest relation. This is the left context. Similarly, we extract the middle left context as the sequence of words between c_i and relation r. The middle right context is the sequence of words between relation r and c_j. The right context is the sequence of words between c_j and either another relation r_p (if one exists) or the end of the sentence. We experiment with three types of patterns: first, purely lexical (only lexicons are used for pattern generation); second, purely syntactic (only part-of-speech tags are used for pattern generation); and third, mixed (a combination of lexicons and part-of-speech tags). For mixed patterns, we replace all nouns, verbs, adjectives and adverbs with their part-of-speech tags and leave prepositions, particles and other words as lexicons. Iteration: Our system is an iterative process and gets better qualitatively and quantitatively with each iteration. The number of iterations is highly dependent on the application of interest, the pattern database size, the size of the corpus and the time sensitivity of the system. We experimented on a smaller sample of data to see the convergence of the algorithm. We also iterated over a large corpus to see the effect of iteration on the number of patterns and tuples. Since the extraction algorithm is based on an active learning methodology, the system can perform quite well with an iteration count as small as 2 on a large corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Algorithm 1 Iterative Pattern Induction. Input: Patterns P = {seed pattern}, Tuples T = {\u03c6}, Sentences S = {s_1, s_2, ..., s_n}. Output: Patterns P = {p_1, p_2, ..., p_x}, Tuples T = {t_1, t_2, t_3, ..., t_y}. 1: for every s_i \u2208 S do 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "C_prob = {c_1, c_2, ..., c_j} \u2190 extractConcepts(s_i) 3: R_prob = {r_1, r_2, ..., r_h} \u2190 extractRelations(s_i) 4: p_sent = replaceConceptsRelations(C_prob, R_prob) 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "T_prob = {t_1, ..., t_u} \u2190 extractProbableTuples(C_prob, R_prob) 6: end for 7: for every t_j \u2208 T_prob do 8: pattern p_i = extractPatternFor(s_i, p_s) 9: if p_i \u2208 P && t_j \u2209 T 10: T.add(t_j), P.update(p_i) 11: else if p_i \u2209 P && t_j \u2208 T 12: P.add(p_i), T.update(t_j) 13: else if p_i \u2208 P && t_j \u2208 T 14: P.update(p_i), T.update(t_j) 15: end if 16: end for We employ a holistic approach to concept and relation extraction that enforces coherence between the relations and concepts in tuples. To ensure the validity of extracted tuples, we select patterns and tuples that occur more than \u03b1 (3 in our experiments) and \u03b2 (2 for medical and 1 for Wikipedia in our experiments) times, respectively. Also, the total frequency of a pattern p for a relation r is defined as the sum of the frequencies of p over all entity pairs that have relation r. We define the confidence of a tuple as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Conf(t) = \\sum_{p_i \\in P_t} \\frac{f(p_i)}{f(p_{max_t})} \\log(N)",
"eq_num": "(1)"
}
],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "where f(p_i) is the frequency of pattern p_i for relation r such that tuple t also has relation r. Here, f(p_max_t) is the frequency of the pattern that has the maximum frequency for relation r, and N is the total number of distinct patterns that match tuple t. Note that the confidence Conf(t) can be greater than 1, depending on the number of patterns that extract tuple t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create Tuple/Pattern Extraction Methodology",
"sec_num": "4"
},
{
"text": "Traditional vector space model based relevance cannot be applied to the concept-based relevance paradigm. Hence, we employ PMI-based relevance for tuple retrieval. If e_1 is the query entity for which a search is executed, then the relevance of a tuple is calculated in terms of the PMI between the query entity e_1 and the second argument of each tuple that contains e_1 as its first argument. The PMI between entities e_1 and e_2 is defined as PMI(e_1, e_2) = log [P(e_1, e_2) / (P(e_1, e) P(e_2, e))] = log [N n_12 / (n_1 n_2)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuple relevance",
"sec_num": "5.2"
},
{
"text": "(2) NPMI(e_1, e_2) = PMI(e_1, e_2) / (\u2212log P(e_1, e_2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuple relevance",
"sec_num": "5.2"
},
{
"text": "where N is the total number of tuples in the corpus; P(e_1, e_2) = n_12/N, where n_12 is the number of sentences containing tuples that have e_1 and e_2 as arguments; P(e_1, e) = n_1/N is the probability that entity e_1 co-occurs with an entity e in tuples; and P(e_2, e) = n_2/N is the probability that entity e_2 co-occurs with an entity e in tuples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuple relevance",
"sec_num": "5.2"
},
{
"text": "6 Prototype and Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuple relevance",
"sec_num": "5.2"
},
{
"text": "We built the system prototype based on the process explained in this paper for two datasets, namely Wikipedia and medical sites. We crawled 10 medical information sites and collected sentences about medicine. The prototype provides a tuple-searching interface and a concept-graph-based navigation system. We demonstrate the usefulness of the system with medical information and evaluate it against a few relations in Wikipedia. Figure 2 shows a snapshot of the prototype on medical data for another example.",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 440,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Prototype",
"sec_num": "6.1"
},
{
"text": "We compared the results of our system with other systems such as Reverb, TextRunner and WOE. For evaluation purposes, we used the test set of 500 sentences used in the Reverb system evaluation (Fader et al., 2011). The figures show a quantitative comparison of our system with Reverb and WOE. It has to be noted, however, that this result does not evaluate the iterative process of CREATE. The distinctive advantage of CREATE is seen when it is applied iteratively to a relatively larger corpus. Figure 5 shows the comparison of CREATE with Reverb, WOE and TextRunner. We see improved recall at around 92% and precision around 75% for CREATE, which outperforms all other systems. Similarly, figure 6 shows the effect of iteration on the performance of the CREATE system. We see a rapid increase in performance in the initial iterations, which then stabilizes after a few iterations.",
"cite_spans": [
{
"start": 187,
"end": 207,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 521,
"end": 529,
"text": "Figure 5",
"ref_id": null
},
{
"start": 715,
"end": 723,
"text": "figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Open Information Extraction Systems",
"sec_num": "6.2"
},
{
"text": "We also experimented with the performance based on different patterns. Figure 7 shows that recall for POS pattern is the highest but the precision is highest with mixed pattern.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Comparison with Open Information Extraction Systems",
"sec_num": "6.2"
},
{
"text": "We used the Semantically Annotated Snapshot of the English Wikipedia (Atserias et al., 2008) relation tuples as the first large dataset. The SW1 corpus is a snapshot of the English Wikipedia dated 2006-11-04, processed with a number of publicly available NLP tools. We chose this data because it has already been processed and has shallow parsing information such as POS tags and named entities in seven categories. To demonstrate the interchangeability of the concept extraction module, we used the named entities as concepts for relation extraction. We then generated tuples from the data. Since it is not possible to evaluate all the relation tuples extracted from Wikipedia, we performed a sample evaluation of the system on a few sampled relations and tuples. We compared the performance of our system in terms of precision and recall against DBpedia. The evaluation in terms of precision and recall is shown in Table 4. Precision and recall are given by the following equations",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Atserias et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 904,
"end": 912,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Wikipedia Tuple Extraction",
"sec_num": "6.3"
},
{
"text": "We have qualitatively and quantitatively demonstrated the effectiveness and usefulness of our system and of relation extraction systems overall. With increasing amounts of data becoming available, the value and importance of systems such as CREATE are ever increasing. We have demonstrated the prospects of relation extraction systems. At the same time, we also need to be aware of the challenges that must be solved before we can realize a fully functional machine reading system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snowball: Extracting relations from large plain-text collections",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the fifth ACM conference on Digital libraries",
"volume": "",
"issue": "",
"pages": "85--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snow- ball: Extracting relations from large plain-text col- lections. In Proceedings of the fifth ACM conference on Digital libraries, pages 85-94. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantically annotated snapshot of the english wikipedia. Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)",
"authors": [
{
"first": "Jordi",
"middle": [],
"last": "Atserias",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Attardi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordi Atserias, Hugo Zaragoza, Massimiliano Cia- ramita, and Giuseppe Attardi. 2008. Semantically annotated snapshot of the english wikipedia. Pro- ceedings of the Sixth International Language Re- sources and Evaluation (LREC'08).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "The semantic web",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The opennlp maxent package in java",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Morton",
"suffix": ""
},
{
"first": "Gann",
"middle": [],
"last": "Bierner",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge, Tom Morton, and Gann Bierner. 2004. The opennlp maxent package in java. URL: http://maxent.sourceforge.net.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Open information extraction for the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J Cafarella, Stephen Soder- land, Matt Broadhead, and Oren Etzioni. 2009. Open information extraction for the web. University of Washington.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extracting patterns and relations from the world wide web",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
}
],
"year": 1999,
"venue": "The World Wide Web and Databases",
"volume": "",
"issue": "",
"pages": "172--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases, pages 172-183. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Subsequence kernels for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond Mooney. 2006. Subse- quence kernels for relation extraction. Advances in neural information processing systems, 18:171.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Toward an architecture for neverending language learning",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Estevam",
"middle": [
"R"
],
"last": "Hruschka Jr",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010)",
"volume": "2",
"issue": "",
"pages": "3--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never- ending language learning. In Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010), volume 2, pages 3-3.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Connections between the lines: augmenting social networks with text",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "169--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Jordan Boyd-Graber, and David M Blei. 2009. Connections between the lines: aug- menting social networks with text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 169-178. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A web of concepts",
"authors": [
{
"first": "Nilesh",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Raghu",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Tomkins",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bohannon",
"suffix": ""
},
{
"first": "Sathiya",
"middle": [],
"last": "Keerthi",
"suffix": ""
},
{
"first": "Srujana",
"middle": [],
"last": "Merugu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the twenty-eighth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nilesh Dalvi, Ravi Kumar, Bo Pang, Raghu Ramakr- ishnan, Andrew Tomkins, Philip Bohannon, Sathiya Keerthi, and Srujana Merugu. 2009. A web of con- cepts. In Proceedings of the twenty-eighth ACM SIGMOD-SIGACT-SIGART symposium on Princi- ples of database systems, pages 1-12. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, pages 1535-1545. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations",
"authors": [
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 22. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanda Kambhatla. 2004. Combining lexical, syntac- tic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive poster and demonstra- tion sessions, page 22. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards large-scale unsupervised relation extraction from the web",
"authors": [
{
"first": "Bonan",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal on Semantic Web and Information Systems (IJSWIS)",
"volume": "8",
"issue": "3",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonan Min, Shuming Shi, Ralph Grishman, and Chin- Yew Lin. 2012. Towards large-scale unsuper- vised relation extraction from the web. International Journal on Semantic Web and Information Systems (IJSWIS), 8(3):1-23.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of the 16th international con- ference on World Wide Web, pages 697-706. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised techniques for discovering ontology elements from wikipedia article links",
"authors": [
{
"first": "Zareen",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zareen Syed and Tim Finin. 2010. Unsupervised techniques for discovering ontology elements from wikipedia article links. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Read- ing, pages 78-86. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Open information extraction using wikipedia",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "118--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pages 118-127. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised relation extraction by mining wikipedia texts using information from the web",
"authors": [
{
"first": "Yulan",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "Zhenglu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "1021--1029",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulan Yan, Naoaki Okazaki, Yutaka Matsuo, Zhenglu Yang, and Mitsuru Ishizuka. 2009. Unsupervised relation extraction by mining wikipedia texts using information from the web. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 1021-1029. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Structured relation discovery using generative models",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1456--1466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and An- drew McCallum. 2011. Structured relation discov- ery using generative models. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 1456-1466. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Textrunner: open information extraction on the web",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "25--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. Textrunner: open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 25-26. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. The Journal of Machine Learning Re- search, 3:1083-1106.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Extracting relations with integrated information using kernel methods",
"authors": [
{
"first": "Shubin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "419--426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shubin Zhao and Ralph Grishman. 2005. Extract- ing relations with integrated information using ker- nel methods. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguis- tics, pages 419-426. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Statsnowball: a statistical approach to extracting entity relationships",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zaiqing",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Xiaojiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Zhu, Zaiqing Nie, Xiaojiang Liu, Bo Zhang, and Ji-Rong Wen. 2009. Statsnowball: a statistical ap- proach to extracting entity relationships. In Pro- ceedings of the 18th international conference on World wide web, pages 101-110. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Overall System Architecture. Feature: We consider lexical and shallow parse information as features for relation extraction.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Figure 2: Concept based Search User Interface",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Effect of Iteration on Number of Patterns; Effect of Iteration on Number of Tuples. Figures 3 and 4 show the effect of iteration with the CREATE algorithm: in the initial iterations there is a rapid increase in the number of patterns and tuples, but growth converges in later iterations. As a proof of concept, we experimented with a sample dataset that we created from medical sentences; tuple and pattern generation converged within 5 iterations. Comparison of CREATE performance with Reverb, WOE and TextRunner.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Precision/ Recall variance with Confidence",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Iterative Pattern Induction Algorithm",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"text": "Running Example of Tuple and Pattern Extraction",
"type_str": "table",
"num": null,
"content": "<table><tr><td>5 Tuple Refinement</td></tr><tr><td>5.1 Tuple and Pattern Filtering</td></tr></table>",
"html": null
},
"TABREF3": {
"text": "Data statistics for wikipedia.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF5": {
"text": "Data Statistics.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}