{
"paper_id": "Y11-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:39:47.241893Z"
},
"title": "Iteratively Estimating Pattern Reliability and Seed Quality With Extraction Consistency *",
"authors": [
{
"first": "Yi-Hsun",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "Section 2, Kuang-Fu Road",
"postCode": "101",
"settlement": "Hsinchu",
"country": "Taiwan, R.O.C"
}
},
"email": ""
},
{
"first": "Chung-Yao",
"middle": [],
"last": "Chuang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"addrLine": "128 Academia Road, Sec.2",
"settlement": "Nankang, Taipei",
"country": "Taiwan, ROC"
}
},
"email": "cychuang@iis.sinica.edu.tw"
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "Section 2, Kuang-Fu Road",
"postCode": "101",
"settlement": "Hsinchu",
"country": "Taiwan, R.O.C"
}
},
"email": "hsu@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we focus on the task of distilling relation instances from the Web. Most of the approaches for this task were based on provided seed instances or patterns to initiate the process. Thus, the result of the extraction depends largely on the quality of the instances and patterns. For this matter, we propose an iterative mechanism that estimates the reliability of a pattern by the consistency of its extractions, and reevaluate the usefulness of seed instance based on estimated pattern reliability. The resulting system is a semi-supervised method that can take a large quantity of seed instances with diverse quality. To evaluate the effectiveness of our approach, we experimented on 8 types of relationships. The empirical results show that our system performs quite consistency in different relationships while maintaining high precision and recall value.",
"pdf_parse": {
"paper_id": "Y11-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we focus on the task of distilling relation instances from the Web. Most of the approaches for this task were based on provided seed instances or patterns to initiate the process. Thus, the result of the extraction depends largely on the quality of the instances and patterns. For this matter, we propose an iterative mechanism that estimates the reliability of a pattern by the consistency of its extractions, and reevaluate the usefulness of seed instance based on estimated pattern reliability. The resulting system is a semi-supervised method that can take a large quantity of seed instances with diverse quality. To evaluate the effectiveness of our approach, we experimented on 8 types of relationships. The empirical results show that our system performs quite consistency in different relationships while maintaining high precision and recall value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The rapid growth of the World Wide Web has attracted a lot of research effort on designing methods that automatically extract knowledge or useful information from large, unstructured text. Different from the conventional corpus, the magnitude and noisy natural of the Web has prohibited analytical approaches to be effective. Consequently, most of the systems that took this challenge proceed in a semi-supervised fashion with a human-provided starting point, such as a few instances of the desired extraction (Mann and Yarowsky, 2005; Muslea, 1999; Ravichandran and Hovy, 2002; Pantel and Pennacchiotti, 2006) .",
"cite_spans": [
{
"start": 510,
"end": 535,
"text": "(Mann and Yarowsky, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 536,
"end": 549,
"text": "Muslea, 1999;",
"ref_id": "BIBREF5"
},
{
"start": 550,
"end": 578,
"text": "Ravichandran and Hovy, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 579,
"end": 610,
"text": "Pantel and Pennacchiotti, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on the extraction of relation instances. In this scenario, the system needs to be fed with prepared pairs, such as <Barack Obama, Auguest 4th>, or initial extraction patterns to bootstrap. Kozareva and Hovy (2010) mentioned that seed selection plays an important role in this kind of semi-supervised approaches. Therefore, how to select high quality seeds in the initial stage is a critical issue. Most researchers select seeds manually to avoid this problem, but the scalability of such manual selection is not promising. Then, these approaches utilize the given instances, called seeds, and generate extraction patterns that has the potential to locate more instances of the desired type in the text. For example, Ravichandran and Hovy (2002) use surface text patterns like <Person> was born on <Date> to answer questions about birth dates. Different from those approaches that heavily depend on the quality of the initial seeds, in this paper, we took an alternative direction that focuses more on the quantity of the seed instances. Such a pursuit is made possible by the advent of rich knowledge sources such as Wikipedia 1 and CIA Fact Book 2 . For example, Wikipedia Infoboxes 3 provide an opportunity to easily gather a vast amount of seed instances because the data is stored in a template form as illustrated in Figure 1 . Using such sources, we can harvest a large number of pairs like <Kobe Bryant, Pennsylvania> from the below infobox as seed instances for birth place extraction. However, using arbitrary seeds to retrieve sentences from the Web will potentially result in a large number of irrelevant content, which will in turn hamper pattern production.",
"cite_spans": [
{
"start": 213,
"end": 237,
"text": "Kozareva and Hovy (2010)",
"ref_id": "BIBREF3"
},
{
"start": 740,
"end": 768,
"text": "Ravichandran and Hovy (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1346,
"end": 1354,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To demonstrate such a situation, we conducted an experiment on seeds gathered from Wikipedia infobox. The results are presented in Table 1 . For each relation type, we randomly select 200 seed instances for forming queries and for each query, evaluate first ten snippets returned from search engine. The relevance of the retrieved snippets are judged by two human annotators. We can see that the relevance ratio is not perfect even we have used both entities in the pair for forming the query. One factor behind such imperfection is that the open and voluntary nature of Wikipedia allows editors to fill in information of different specificity. For example, the birth place field of some people contains only less detailed information such as the country instead of more specific description like county or city. Moreover, as can be seen in Table 1 , the relevance ratio is not consistent among different relation types and may be surprisingly low such as the type of death place. Such a situation will affect the performance of semi-supervised approaches greatly (Xu et al., 2007) .",
"cite_spans": [
{
"start": 1064,
"end": 1081,
"text": "(Xu et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 841,
"end": 848,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fortunately, having abundant seed instances offers us an opportunity to mitigate such a problem. In this paper, we propose a mechanism that iteratively assesses both the quality of seed instances and induced extraction patterns. Our strategy is to estimate the reliability of an extraction pattern by the consistency of its extractions, and alternately, reevaluate the usefulness of seed instances based on estimated pattern reliability. The resulting system works best when it is fed with a large number of seeds, so that the reliability of the induced pattern can be better estimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the next section, we review several semi-supervised approaches that are comparable to our system. We introduce the proposed CEPRA method in Section 3. In Section 4, we describe the experimental settings; and in Section 5, we discuss the experiments conducted to evaluate the performance of different selection approaches. We summarize the results in Section 6. Then, in Section 7, we provide some concluding remarks and consider avenues for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semi-supervised approaches start with manually prepared patterns or seeds, and then generate surface text patterns, which are syntactic patterns that connect two entities in one relationship. Surface text patterns are widely used for information extraction. For example, <Person> was born in <Year> is an intuitive pattern for matching the birth year of someone. This pattern connect the person and the corresponding year as a semantic relation (birth year) and thus can be used to effectively extract the information. For binary relation extraction, such as the above example of <Person, Birth Year>, the first term is often called the hook term(e.g. Person), and the second one the target term(e.g. Birth Year) (Alfonseca et al., 2006; Mann and Yarowsky, 2005; Ravichandran and Hovy, 2002) .",
"cite_spans": [
{
"start": 713,
"end": 737,
"text": "(Alfonseca et al., 2006;",
"ref_id": "BIBREF1"
},
{
"start": 738,
"end": 762,
"text": "Mann and Yarowsky, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 763,
"end": 791,
"text": "Ravichandran and Hovy, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Most pattern-based approaches for relation extraction are implemented as follows. First, a set of seed instances are prepared in the form of pairs of hook and target terms serving as examples of the intended relation type. For instance, <Kobe Bryant, 1978> could be one of the seed instances that fed into a relation extraction system for learning how to extract the birth year of someone. The seed instances are then used as queries for retrieving sentences containing both the hook and the target terms, most popularly from the Web. The retrieved sentences are subsequently used for generating extraction patterns. Several approaches for this step have been proposed such as the longest common substring (Agichtein et al., 2001) , substrings in suffix trees (Ravichandran and Hovy, 2002; Ruiz-Casado et al., 2007) and edit distance based alignment (Ruiz-Casado et al., 2007) . However, a portion of the retrieved sentences can be, to a certain degree, not relevant in describing the intended relation. Thus, the patterns built on top of them can deviate from the original goal. To overcome this problem, most approaches place an evaluation step in which patterns are assessed using certain criteria. Such an evaluation is usually done using a sentence collection different from the one used for producing the extraction patterns. When using the Web as our extraction source, one apparent choice of this testing collection is the sentences retrieved by only submitting the hook terms to the search engine. In this paper, we denote such a sentence collection as S H , and the sentence collection gathered using both hook and target terms as S \u27e8H,T \u27e9 .",
"cite_spans": [
{
"start": 706,
"end": 730,
"text": "(Agichtein et al., 2001)",
"ref_id": "BIBREF0"
},
{
"start": 760,
"end": 789,
"text": "(Ravichandran and Hovy, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 790,
"end": 815,
"text": "Ruiz-Casado et al., 2007)",
"ref_id": "BIBREF8"
},
{
"start": 850,
"end": 876,
"text": "(Ruiz-Casado et al., 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "With an additional sentence collection, we can assess the utility of the generated patterns. A simple way to estimate the usefulness of an extraction pattern, p, is based on the extraction frequency of that pattern. This frequency-based estimation (FE) method counts how many terms (regardless correct or not) that pattern, with hook-term slot filled, is able to extract from S H ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "f F E (p) = \u2211 h\u2208H |p h (S H )|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "where H is the set of all hook terms, p h is the pattern p with hook term h, p h (S H ) is the bag of terms that p h extracted from S H , and |p h (S H )| denotes the size of that bag of terms. Another intuitive approach for evaluating extraction patterns is based on estimated accuracy. The accuracy can only be estimated because of the noisy nature of the Web. For example, in the Chinese portion of Wikipedia, the birth place of Kobe Bryant extracted from infoboxes is \"\u8cd3\u5915\u6cd5\u5c3c\u4e9e\u5dde (Pennsylvania)\" and \"\u8cbb\u57ce (Philadelphia)\". However, there are several translations other than the above for \"Pennsylvania\" and \"Philadelphia\" in Chinese, which all could appear in the retrieval. Besides the translation problem that we encountered frequently in this work, the voluntary nature of Wikipedia renders automatic evaluation vulnerable to specificity mismatches. For instance, the extracted information may be more detailed than the information provided in the infobox, however, there is no general and convenient way to adjust such a mismatch. For these reasons, rather than calling this approach accuracy-based, we refer it as confidence-based estimation (CE), and is formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "f CE (p) = \u2211 \u27e8h,t\u27e9\u2208\u27e8H,T\u27e9 |p h (S H ) = t| \u2211 h\u2208H |p h (S H )|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
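The two scores above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a pattern's extractions are stored as a dict mapping each hook term to the bag (list) of terms the pattern extracted from S_H, and that `seeds` maps hook terms to their target terms; all names are ours.

```python
from collections import Counter

def f_fe(extractions):
    """Frequency-based estimate: count every term the pattern
    extracts across all hook terms, correct or not."""
    return sum(len(terms) for terms in extractions.values())

def f_ce(extractions, seeds):
    """Confidence-based estimate: fraction of extracted terms
    that match the seed target for their hook term."""
    total = f_fe(extractions)
    if total == 0:
        return 0.0
    correct = sum(Counter(terms)[seeds[h]]
                  for h, terms in extractions.items() if h in seeds)
    return correct / total
```

For example, with `extractions = {"Kobe Bryant": ["1978", "1978", "2006"]}` and `seeds = {"Kobe Bryant": "1978"}`, `f_fe` is 3 and `f_ce` is 2/3.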
{
"text": "where \u27e8H, T\u27e9 is the set of seed instances in which each \u27e8h, t\u27e9 is a pair of hook and target terms, and |p h (S H ) = t| is the number of terms in p h (S H ) that matches t. Both frequency and confidence-based approaches rely heavily on the seed quality. In order to alleviate that, Pantel and Pennacchiotti (2006) proposed a method called Espresso which uses point-wise mutual information (PMI) to evaluate the strength of association between a pattern and its extractions,",
"cite_spans": [
{
"start": 282,
"end": 313,
"text": "Pantel and Pennacchiotti (2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "f ES (p) = \u2211 \u27e8h,t\u27e9\u2208\u27e8H,T\u27e9 pmi(\u27e8h,t\u27e9,p) max pmi \u00d7 g ES (\u27e8h, t\u27e9) |\u27e8H, T\u27e9|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "where pmi(\u27e8h, t\u27e9, p) is the point-wise mutual information between pattern p and a relation instance \u27e8h, t\u27e9, max pmi is the largest PMI observed, and g ES (\u27e8h, t\u27e9) is an estimate of the quality of instance \u27e8h, t\u27e9,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "g ES (\u27e8h, t\u27e9) = \u2211 p\u2208P pmi(\u27e8h,t\u27e9,p) max pmi \u00d7 f ES (p) |P|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
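The mutual recursion between f_ES and g_ES can be sketched as a simple fixed-point iteration. This is an illustrative sketch, not Espresso's actual implementation: `pmi` is assumed to be a precomputed dict mapping (instance, pattern) pairs to PMI values, and the function name is ours.

```python
def espresso_scores(pmi, patterns, instances, iterations=3):
    """Alternate between pattern reliability f and instance quality g,
    each a PMI-weighted average of the other (normalized by max PMI)."""
    max_pmi = max(pmi.values())
    g = {i: 1.0 for i in instances}  # start with uniform instance quality
    f = {}
    for _ in range(iterations):
        f = {p: sum(pmi.get((i, p), 0.0) / max_pmi * g[i]
                    for i in instances) / len(instances)
             for p in patterns}
        g = {i: sum(pmi.get((i, p), 0.0) / max_pmi * f[p]
                    for p in patterns) / len(patterns)
             for i in instances}
    return f, g
```

Patterns whose extractions co-occur strongly with high-quality instances float to the top, which is exactly the co-occurrence assumption the surrounding text criticizes.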
{
"text": "where P is the set of all patterns. These two formulas are calculated iteratively to adjust the weights of both patterns and seed instances. This approach utilizes the co-occurrence as an indication. However, patterns frequently co-occurred with some instances may still have no relevance to the targeted relation. For example, in (Blohm et al., 2007) , Espresso got a low precision result in birth year extraction. Xu et al. (2007) noted that a factor for pattern-based approach to be effective is the ratio of relevant sentences within the text collection for generating the patterns. In this paper, we gather many seeds extracted from Infoboxes in the initial stage. Due to the seed quality is not consistent, we propose a new approach for evaluating both the patterns and seed instances. Different from the above approaches which only evaluate performance based on S H , our proposal further utilizes the statistical similarity between extractions from S \u27e8H,T \u27e9 and extractions from S H , which we believe is a good indicator of pattern reliability.",
"cite_spans": [
{
"start": 331,
"end": 351,
"text": "(Blohm et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 416,
"end": 432,
"text": "Xu et al. (2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In this section, we focus on the main concept about estimating pattern's reliability with consistency measurement between different sentence collections and measure the seed's quality by reliable patterns, described in the following sub-sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "As pattern generation process can potentially produce a large number of extraction patterns, we need a strategy to find the most reliable patterns. To explain the intuition behind our approach, consider an \"oracle collection\", S O , which contains all sentences describing the targeted relation that we can find on the Web. Figure 2 shows the relationship between S H , S \u27e8H,T \u27e9 and S O . Ideally, S O and S \u27e8H,T \u27e9 would be subsets of S H if we could retrieve all sentences containing hook terms from the Web. As depicted in Figure 2 , S \u27e8H,T \u27e9 is not a subset of S O because the sentences retrieved by using hook and target terms will contain some noisy results as discussed in Section 1 and demonstrated in Table 1 . When generating patterns, S \u27e8H,T \u27e9 \\ S O will cause relevance problems, which makes the pattern induction procedure producing deviated patterns. On the other hand, S O \\ S \u27e8H,T \u27e9 will cause specificity problems, which undermines useful patterns in evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 525,
"end": 533,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 709,
"end": 716,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Manually selecting seed instances could reduce the extent of these two kinds of problems. However, the scalability of such an approach is not promising. Thus, we need an approach to automatically assess the utility of each seed instance in evaluating the extraction patterns. Supposedly, high utility seed instances are the ones which distribute high scores into the reliable patterns. In this work, we assume that the reliability of a pattern can be measured by looking into the performance similarities between applying that pattern to S H and S \u27e8H,T \u27e9 . We consider such a similarity because if a pattern extract mostly in the intersection of S H , S \u27e8H,T \u27e9 and S O , then its performance will be consistent regardless extracting from S H or S \u27e8H,T \u27e9 . Once those highly reliable patterns are found, we could in turn assessing the quality of each seed instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "In this work, we use the extended Jaccard coefficient (Strehl and Ghosh, 2000) (EJAC) to evaluate the consistency in performance when applying to two different sentence collection",
"cite_spans": [
{
"start": 54,
"end": 78,
"text": "(Strehl and Ghosh, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(p) = V S H (p) \u2022 V S \u27e8H,T \u27e9 (p) \u2225 V S H (p) \u2225 2 + \u2225 V S \u27e8H,T \u27e9 (p) \u2225 2 \u2212V S H (p) \u2022 V S \u27e8H,T \u27e9 (p)",
"eq_num": "(1)"
}
],
"section": "Proposed Approach",
"sec_num": "3"
},
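Equation 1 is an extended Jaccard similarity between the pattern's two performance vectors. A minimal sketch in plain Python (the vectors would be the weighted precision values defined just below; the function name is ours):

```python
def extended_jaccard(v_h, v_ht):
    """Extended Jaccard coefficient between a pattern's performance
    vector on S_H and its performance vector on S_<H,T>."""
    dot = sum(a * b for a, b in zip(v_h, v_ht))
    denom = (sum(a * a for a in v_h)
             + sum(b * b for b in v_ht) - dot)
    return dot / denom if denom else 0.0
```

Identical performance on both collections yields 1; completely disjoint performance yields 0.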
{
"text": "where V X (p) is the performance vector of p under sentence collection X which comprises the pattern's weighted precision estimates for each seed instance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "V X (p) = ( \u03bb \u27e8h 1 ,t 1 \u27e9 (p, X), \u03bb \u27e8h 2 ,t 2 \u27e9 (p, X), ..., \u03bb \u27e8hn,tn\u27e9 (p, X) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "\u03bb \u27e8h i ,t i \u27e9 (p, X) = A \u27e8h i ,t i \u27e9 (p, X) \u00d7 w \u27e8h i ,t i \u27e9 in which w \u27e8h i ,t i \u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "is the weight of seed instance \u27e8h i , t i \u27e9 (described below), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "A \u27e8h i ,t i \u27e9 (p, X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "is the estimated precision of p,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "A \u27e8h i ,t i \u27e9 (p, X) = |p h i (X) = t i | |p h i (X)|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "When binding to some hook terms, a pattern may not be able to extract anything from S H or S \u27e8H,T \u27e9 . In this case, we use the expected target accuracy (ETA), \u00b5 X , as the missing value,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00b5 X (p) = \u2211 \u27e8h,t\u27e9\u2208\u27e8H,T\u27e9 A \u27e8h,t\u27e9 (p, X) \u00d7 w \u27e8h,t\u27e9 \u2211 \u27e8h,t\u27e9\u2208\u27e8H,T\u27e9 w \u27e8h,t\u27e9",
"eq_num": "(2)"
}
],
"section": "Proposed Approach",
"sec_num": "3"
},
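Equation 2 is simply a seed-weighted average of the per-instance precision estimates. A dict-based sketch with hypothetical names:

```python
def expected_target_accuracy(precision, weights):
    """Weighted average of A_<h,t>(p, X) over all seed instances,
    used to fill in a missing value when the bound pattern
    extracts nothing for a hook term."""
    total_w = sum(weights.values())
    if total_w == 0:
        return 0.0
    return sum(precision.get(s, 0.0) * w
               for s, w in weights.items()) / total_w
```

With uniform weights this reduces to the plain mean of the per-seed precision estimates.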
{
"text": "where X can be S H or S \u27e8H,T \u27e9 . However, using only Equation 1 is not sufficient because inferior patterns would have similar low precision distribution in both S H and S \u27e8H,T \u27e9 . Therefore, we combine Equation 1 and 2 to form our evaluation metric",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "f CEP RA (p) = \u03b1 \u00d7 \u00b5 S \u27e8H,T \u27e9 (p) + \u03b2 \u00d7 J(p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "In this paper, we set both \u03b1 and \u03b2 to 0.5. As mentioned earlier, the quality of the seed instances is an important aspect of this task. Intuitively, the utility of a seed instance can be assessed by the frequency that it is matched by extraction patterns. This assumes that the more inbound links to a seed, the higher quality it gets. However, there may be some overly-general patterns that are characterized by high coverage and low precision. Therefore, we need to prune those possibly unreliable patterns and utilize the remaining ones to evaluate the quality of seed instances. It is conceivable that if the reliable patterns cannot extract correct target term for a specific hook term, then the quality of this seed instance is questionable. In other word, the more reliable the patterns matching the instance, the higher will be the quality of that instance. Hence, we derive our formula for weighing seed instance as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "v \u27e8h,t\u27e9 = \u2211 p\u2208P { A \u27e8h,t\u27e9 (p, S \u27e8H,T \u27e9 ), if f CEP RA (p) > \u03b5 0, otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "where \u03b5 = 0.7 in this work. This value is normalized to obtain the final weight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "w \u27e8h,t\u27e9 = v \u27e8h,t\u27e9 \u2212 min i v \u27e8h i ,t i \u27e9 max i v \u27e8h i ,t i \u27e9 /c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "where c = 20 in this work. w \u27e8h,t\u27e9 's and f CEP RA (p)'s are calculated iteratively. Initially, we compute f CEP RA (p)'s using w \u27e8h,t\u27e9 = 1. Then, we collect patterns with high f CEP RA values to determine w \u27e8h,t\u27e9 's. After setting the seed weights, we can re-calculate the f CEP RA 's with reduced influence from inferior seeds. As this process iterates, only patterns whose f CEP RA values higher than a threshold are retained. Note that we also keep patterns with high ETA value if it can extract several seed instances with high w \u27e8h,t\u27e9 and their f CEP RA is set to its \u00b5 S \u27e8H,T \u27e9 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
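The alternating loop described above can be sketched as follows. This is our illustrative reconstruction, not the authors' code: it assumes \u00b5 and J can be recomputed from the current seed weights via caller-supplied callables, and all names are hypothetical.

```python
def cepra_iterate(patterns, seeds, accuracy, mu, jac,
                  alpha=0.5, beta=0.5, eps=0.7, c=20, rounds=3):
    """Alternate between scoring patterns (f_CEPRA) and re-weighting
    seeds from the patterns that pass the reliability threshold.
    accuracy[(p, s)] plays the role of A_<h,t>(p, S_<H,T>)."""
    w = {s: 1.0 for s in seeds}  # initial seed weights
    f = {}
    for _ in range(rounds):
        # f_CEPRA(p) = alpha * mu(p) + beta * J(p) under current weights
        f = {p: alpha * mu(p, w) + beta * jac(p, w) for p in patterns}
        # v: support each seed receives from reliable patterns only
        v = {s: sum(accuracy.get((p, s), 0.0)
                    for p in patterns if f[p] > eps) for s in seeds}
        lo, hi = min(v.values()), max(v.values())
        if hi > 0:
            w = {s: (v[s] - lo) / (hi / c) for s in seeds}
    return f, w
```

Low-quality seeds receive little support from reliable patterns, so their weight drops toward zero and they contribute less to the next round's pattern scores.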
{
"text": "In this section, we describe our system, Consistent Estimation Pattern-based Relation Acquirer (CEPRA), which runs briefly as follows. Based on the seed instances gathered from the infoboxes in Chinese portion of Wikipedia, we collect two sentence collections S H and S \u27e8H,T \u27e9 from the Web. The retrieved sentences are preprocessed with segmentation and parts-of-speech tagging. Next, extraction patterns are generated with an alignmentbased approach (Sung et al., 2009) based on sentences in S \u27e8H,T \u27e9 . Those patterns are then evaluated by the approach described in Section 3. Finally, we utilize the retained patterns to extract relation instances.",
"cite_spans": [
{
"start": 451,
"end": 470,
"text": "(Sung et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture and Experimental Environment",
"sec_num": "4"
},
{
"text": "To assess the performance of our method, we performed experiments on 8 types of relations. The experimental settings are described below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture and Experimental Environment",
"sec_num": "4"
},
{
"text": "Experiment Data: We collected our seed instances from the infoboxes in the Chinese portion of Wikipedia. The collected instances are biographical relations and of 8 different types. To compile the training S \u27e8H,T \u27e9 , for each relation type, we used 1000 seed instances and submitted the hook and target terms of each training seed to retrieve 50 snippets from Google. Among those 1000 seed instances, we randomly picked 50 instances and used their corresponding sentences retrieved from the Web to run pattern generation. The produced patterns are evaluated using whole training sentence collection. A testing sentence collection 4 is formed by using a separate set of 200 instances from each relation type. This testing set contains 3768 sentences and is annotated by two human annotators (the relevance statistics are shown in Table 1 .)",
"cite_spans": [],
"ref_spans": [
{
"start": 829,
"end": 836,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "System Architecture and Experimental Environment",
"sec_num": "4"
},
{
"text": "We compared our method with three other approaches described in Section 2. In this paper, we want to evaluate the difference in pattern selection, thus, we use the same pattern generation procedure for all 4 approaches. The results are compared both in recall and precision. Figure 3 shows the precision value of the eight relationships for top N ranked patterns. In this figure, we can observe that none of the approaches performs well on the Death Place relationship. In Table 1 , we can find although the relevant ratio is quite low in the Death Place relationship, which affects the performance of the pattern-based systems. However, on this relationship, our approach achieves a higher precision score (0.67) than the compared approaches. In the Birth Place relationship, the F E, CE and Espresso approaches would get a lower precision at top 500 ranked patterns. In contrast, our approach achieves a perfect precision score of 1 on the top 500 and 1000 patterns. In these relationships with lower relevant ratio, our approach still performs better than other approaches. Next, in some relationships with high quality seeds like Birth Date, Birth Year, Nobel Prize and Spouse, the CE and our approach both achieve good performance. But we can find the CE approach with a significant drop when N increased in Death Date and Death Year relationships. Contrast to our approach, we still get a stable and good performance, higher than 0.9, in these relationships. It means that our approach would gather more reliable patterns than other approaches. Finally, unlike other approaches, the Espresso approach only considers the relationships between the targets and the patterns. It did not perform well in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 473,
"end": 480,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation and Comparisons:",
"sec_num": null
},
{
"text": "Next we observe the methods' distribution of precision value between these eight relationships is quite different In Figure 3 . Our approach achieves the highest precision score on the top 500 ranked patterns, but the score decreases slightly as N increases. In contrast, the Espresso's precision score increases with N ranked patterns. However, this phenomenon presents that the top N ranked patterns would not be the good choice. Figure 3 also shows that the CE approach performs quite unstable in different relationships. This approach is easily influenced on the quality of seeds. Next, we discuss the recall and precision value derived by the compared approaches on the eight relationships. Table 2 shows the highest recall value of the eight relationships with precision value above \u03b3, range from 0.7 to 0.9. Because the Espresso and F E approaches get lower 4 http://140.109.17.85/cepra/ precision values at top N ranked patterns, we do not consider these two approaches in this table. First of all, in the Death Year relationship, we can find our approach achieves a higher recall value than the CE approach. As shown in Figure 3 , the CE would encounter a significant drop in this relationship at top 6000 to 7000 ranked patterns. Compared to our approach, we not only retain more reliable patterns but we also get a higher recall value. In the Spouse relationship, although our approach get a bit lower precision value, we get a higher recall value 0.4 compared to 0.14 for the CE approach. Generally speaking, in this empirical experiment, our approach would retain more reliable patterns while maintaining stable and high performance in different relationships.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 125,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 432,
"end": 438,
"text": "Figure",
"ref_id": null
},
{
"start": 696,
"end": 703,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1129,
"end": 1137,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation of Pattern Estimation Approaches",
"sec_num": "5"
},
{
"text": "As mentioned in Section 5, the performance of the CE approach is not stable across relationships. We attribute this to 1) insufficient information for judging whether a target is correct, meaning that the CE approach yields a biased confidence score under the influence of low quality seeds; and 2) the lack of relevant sentences in S_H. For the FE approach, the precision of the top N patterns is quite low because highly frequent patterns are usually general patterns. In some domains, highly frequent patterns may be useful for finding new instances; in relation extraction, however, we need related or specific patterns rather than merely frequent ones. Next, we consider the performance of the Espresso approach. In this paper, we use collections of sentences retrieved from the Web, which may affect Espresso's performance. Moreover, patterns that frequently co-occur with certain targets may describe different relationships, and how to compile a training set is an important issue when applying Espresso. In this paper, we propose an approach that estimates a pattern's reliability across different sets, using an iterative pattern-seed evaluation method to prune irrelevant patterns and low quality seeds. In Section 5, we showed that CEPRA achieves stable performance across relationships. However, several issues remain. 1) Because our approach relies on many seeds extracted automatically from Wikipedia Infoboxes, we need to examine the seed sets in more detail, for example the learning curve for different seed set sizes and the performance under different ratios of low quality seeds. 2) We currently do not address pattern generation; we may need to consider more sophisticated linguistic tools for generating frequent patterns and verify that our validation performance remains consistent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Unlike other semi-supervised approaches that start from manually prepared seeds, we proposed a method that estimates pattern reliability using abundant seeds extracted automatically from Wikipedia Infoboxes. First, we employ an automatic approach to select sentences and then apply an alignment-based pattern generation approach. Next, we use a consistency measurement to estimate pattern reliability, together with an iterative procedure that finds high quality seeds and reliable patterns. Finally, we use the derived patterns to extract precise targets. Our experimental results show that our system performs more stably and accurately than the compared approaches. In future work, we will conduct more experiments on seed set sizes and investigate how deeper linguistic tools can improve the system's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "25th Pacific Asia Conference on Language, Information and Computation, pages 382-391",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.wikipedia.org 2 http://www.cia.gov/library/pulications/the-world-factbook 3 http://en.wikipedia.org/wiki/Help:Infobox",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snowball: a prototype system for extracting relations from large text collections",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Pavel",
"suffix": ""
},
{
"first": "Viktoriya",
"middle": [],
"last": "Sokolova",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Voskoboynik",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 ACM SIGMOD international conference on Management of data, SIGMOD '01",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agichtein, Eugene, Luis Gravano, Jeff Pavel, Viktoriya Sokolova, and Aleksandr Voskoboynik. 2001. Snowball: a prototype system for extracting relations from large text collections. In Proceedings of the 2001 ACM SIGMOD international conference on Management of data, SIGMOD '01, p. 612, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A rote extractor with edit distance-based generalisation and multi-corpora precision calculation",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Castells",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Ruiz-Casado",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions, COLING-ACL '06",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfonseca, Enrique, Pablo Castells, Manabu Okumura, and Maria Ruiz-Casado. 2006. A rote extractor with edit distance-based generalisation and multi-corpora precision calculation. In Proceedings of the COLING/ACL on Main conference poster sessions, COLING-ACL '06, pp. 9-16, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Harvesting relations from the web: quantifiying the impact of filtering functions",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Blohm",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "Egon",
"middle": [],
"last": "Stemle",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 22nd national conference on Artificial intelligence",
"volume": "2",
"issue": "",
"pages": "1316--1321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blohm, Sebastian, Philipp Cimiano, and Egon Stemle. 2007. Harvesting relations from the web: quantifiying the impact of filtering functions. In Proceedings of the 22nd national conference on Artificial intelligence -Volume 2, pp. 1316-1321. AAAI Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Not all seeds are equal: measuring the quality of text mining seeds",
"authors": [
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "618--626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kozareva, Zornitsa and Eduard Hovy. 2010. Not all seeds are equal: measuring the quality of text mining seeds. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pp. 618-626, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multi-field information extraction and cross-document fusion",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "483--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, Gideon S. and David Yarowsky. 2005. Multi-field information extraction and cross-document fusion. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05, pp. 483-490, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extraction patterns for information extraction tasks: A survey",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Muslea",
"suffix": ""
}
],
"year": 1999,
"venue": "AAAI-99 Workshop on Machine Learning for Information Extraction",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muslea, Ion. 1999. Extraction patterns for information extraction tasks: A survey. In AAAI-99 Workshop on Machine Learning for Information Extraction, pp. 1-6.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Espresso: leveraging generic patterns for automatically harvesting semantic relations",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pantel, Patrick and Marco Pennacchiotti. 2006. Espresso: leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Asso- ciation for Computational Linguistics, ACL-44, pp. 113-120, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning surface text patterns for a question answering system",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Ravichandran",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "41--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravichandran, Deepak and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pp. 41-47, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatising the learning of lexical patterns: An application to the enrichment of wordnet by extracting semantic relationships from wikipedia",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Ruiz-Casado",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Castells",
"suffix": ""
}
],
"year": 2007,
"venue": "Data Knowl. Eng",
"volume": "61",
"issue": "",
"pages": "484--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiz-Casado, Maria, Enrique Alfonseca, and Pablo Castells. 2007. Automatising the learning of lexical patterns: An application to the enrichment of wordnet by extracting semantic relationships from wikipedia. Data Knowl. Eng., 61, 484-499, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Value-based customer grouping from large retail data-sets",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Strehl",
"suffix": ""
},
{
"first": "Joydeep",
"middle": [],
"last": "Ghosh",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the SPIE Conference on Data Mining and Knowledge Discovery",
"volume": "",
"issue": "",
"pages": "33--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strehl, Alexander and Joydeep Ghosh. 2000. Value-based customer grouping from large retail data-sets. In In Proceedings of the SPIE Conference on Data Mining and Knowl- edge Discovery, pp. 33-42.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Alignmentbased surface patterns for factoid question answering systems",
"authors": [
{
"first": "",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Cheng-Wei",
"middle": [],
"last": "Cheng-Lung",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu-Chun Yen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2009,
"venue": "Integr. Comput.-Aided Eng",
"volume": "16",
"issue": "",
"pages": "259--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sung, Cheng-Lung, Cheng-Wei Lee, Hsu-Chun Yen, and Wen-Lian Hsu. 2009. Alignment- based surface patterns for factoid question answering systems. Integr. Comput.-Aided Eng., 16, 259-269, August.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A seed-driven bottom-up machine learning framework for extracting relations of various complexity",
"authors": [
{
"first": "Feiyu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, Feiyu, Hans Uszkoreit, and Hong Li. 2007. A seed-driven bottom-up machine learning framework for extracting relations of various complexity. In Proceedings of the 45th",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Annual Meeting of the Association of Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "584--591",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association of Computational Linguistics, pp. 584-591, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "An example of Wikipedia Infobox",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Overlap Diagram between S H , S \u27e8H,T \u27e9 and S O",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Precision of the Top N ranking patterns of the four compared approaches, CEP RA, CE, F E and Espresso. The x-axis represents the precision value and the y-axis represents the number of N .",
"uris": null,
"num": null
},
"TABREF1": {
"text": "Relevance ratio of snippets retrieved by submitting seed instances as queries.",
"num": null,
"content": "<table><tr><td>Type Birth</td><td>Birth</td><td>Birth</td><td>Death</td><td>Death</td><td>Death</td><td>Nobel</td><td>Spouse</td></tr><tr><td>Date</td><td>Place</td><td>Year</td><td>Date</td><td>Place</td><td>Year</td><td>Prize</td><td/></tr><tr><td>Ratio 0.96</td><td colspan=\"3\">0.768 0.955 0.939</td><td>0.107</td><td>0.888</td><td>0.879</td><td>0.889</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Highest recall value with different precision threshold, \u03b3, in these eight relationships.",
"num": null,
"content": "<table><tr><td>Type</td><td>Method</td><td>0.9</td><td>\u03b3 0.8</td><td>0.7</td><td>Type</td><td>Method</td><td>0.9</td><td>\u03b3 0.8</td><td>0.7</td></tr><tr><td>Birth</td><td>CE</td><td colspan=\"4\">0.68 0.70 0.74 Birth</td><td>CE</td><td colspan=\"3\">0.17 0.35 0.43</td></tr><tr><td>Date</td><td colspan=\"4\">CEPRA 0.71 0.74 0.74</td><td>Place</td><td colspan=\"4\">CEPRA 0.17 0.35 0.43</td></tr><tr><td>Birth</td><td>CE</td><td colspan=\"4\">0.71 0.73 0.73 Death</td><td>CE</td><td colspan=\"3\">0.57 0.58 0.58</td></tr><tr><td>Year</td><td colspan=\"4\">CEPRA 0.71 0.72 0.73</td><td>Date</td><td colspan=\"4\">CEPRA 0.56 0.58 0.58</td></tr><tr><td>Death</td><td>CE</td><td>0</td><td>0</td><td>0</td><td>Death</td><td>CE</td><td colspan=\"3\">0.28 0.33 0.35</td></tr><tr><td>Place</td><td>CEPRA</td><td>0</td><td>0</td><td>0</td><td>Year</td><td colspan=\"4\">CEPRA 0.33 0.41 0.43</td></tr><tr><td>Nobel Prize</td><td colspan=\"5\">CE CEPRA 0.38 0.42 0.43 0.39 0.42 0.43 Spouse</td><td>CE CEPRA</td><td>0 0</td><td colspan=\"2\">0.14 0.43 0.4 0.48</td></tr></table>",
"type_str": "table",
"html": null
}
}
}
}