| { |
| "paper_id": "U17-1010", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:11:30.222328Z" |
| }, |
| "title": "Incremental Knowledge Acquisition Approach for Information Extraction on both Semi-structured and Unstructured Text from the Open Domain Web", |
| "authors": [ |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Myung", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "maria.kim@dst.defence.gov.au" |
| }, |
| { |
| "first": "Hee", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Extracting information from semistructured text has been studied only for limited domain sources due to its heterogeneous formats. This paper proposes a Ripple-Down Rules (RDR) based approach to extract relations from both semistructured and unstructured text in open domain Web pages. We find that RDR's 'case-by-case' incremental knowledge acquisition approach provides practical flexibility for (1) handling heterogeneous formats of semi-structured text; (2) conducting knowledge engineering on any Web pages with minimum start-up cost and (3) allowing open-ended settings on relation schema. The efficacy of the approach has been demonstrated by extracting contact information from randomly collected open domain Web pages. The rGALA system achieved 0.87 F1 score on a testing dataset of 100 Web pages, after only 7 hours of knowledge engineering on a training set of 100 Web pages.", |
| "pdf_parse": { |
| "paper_id": "U17-1010", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Extracting information from semistructured text has been studied only for limited domain sources due to its heterogeneous formats. This paper proposes a Ripple-Down Rules (RDR) based approach to extract relations from both semistructured and unstructured text in open domain Web pages. We find that RDR's 'case-by-case' incremental knowledge acquisition approach provides practical flexibility for (1) handling heterogeneous formats of semi-structured text; (2) conducting knowledge engineering on any Web pages with minimum start-up cost and (3) allowing open-ended settings on relation schema. The efficacy of the approach has been demonstrated by extracting contact information from randomly collected open domain Web pages. The rGALA system achieved 0.87 F1 score on a testing dataset of 100 Web pages, after only 7 hours of knowledge engineering on a training set of 100 Web pages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Open Information Extraction (Open IE) (Banko et al., 2007; Wu and Weld, 2010; was introduced to extract information from the open domain Web where the relations of interest cannot be pre-defined in advance due to its heterogeneity in domain. Its purpose is to avoid specifying target relations and developing extraction models for individual target relations. The Open IE systems focus on discovering a binary relation candidate tuple in the form of (E1, RelText, E2) by identifying two entities of interest E1 and E2, and the salient textual cues RelText (aka 'relational text') between the two entities. Then, they classify whether any binary relation R exists between the two entities in a given tuple to extract a binary relation tuple like (E1, R, E2).", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 58, |
| "text": "(Banko et al., 2007;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 59, |
| "end": 77, |
| "text": "Wu and Weld, 2010;", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To date extracting information from the open domain Web has mainly focused on the unstructured text (i.e., where text is formatted in paragraphs and expressed as full sentences). However, most Web pages generally contain information expressed in semi-structured text including tables, lists, isolated words or text snippets as well as unstructured text. Therefore, it is important to develop an IE capability that is able to process both semi-structured and unstructured text from the open domain Web.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Unlike unstructured text, semi-structured text usually includes HTML tags, which is primarily for formatting purposes. The variability in the way people use HTML tags impedes IE process. Above all, HTML tags have the following characteristics which hinder the IE task:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) HTML's tabular structure is often abused to arrange the graphical aspect instead of using cascading style sheets; and (2) HTML tags can be deeply nested mixing relevant content with web noise in a loose manner.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Processing semi-structured text from the open domain Web is very challenging task as it needs to deal with heterogeneous formats as well as heterogeneous domains. It is difficult to create sufficient labelled data for semi-structured text in heterogeneous formats. Due to these difficulties, extracting information from semi-structured text has been studied only for specific domains (Chang et al., 2006) . Moreover, existing semistructured text IE approaches cannot be extended to the open domain Web sources as they usually require domain dependent inputs.", |
| "cite_spans": [ |
| { |
| "start": 384, |
| "end": 404, |
| "text": "(Chang et al., 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In summary, extracting information from semistructured text in the open domain Web presents the following three main challenges:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. It is difficult to distinguish between the relevant content and web noise without domain knowledge. There is no explicit difference in HTML structure between them. 2. There are no clear linguistic markers (e.g., punctuation) to segment semi-structured text in the same manner as a sentence in unstructured text. 3. It is hard to create \"sufficient\" labelled training data and/or a complete ruleset for semi-structured text in open domain due to its heterogeneous formats. Our rGALA system aims to extract information from both semi-structured and unstructured text in the open domain Web. To handle heterogeneous formats in semi-structured text, the rGALA system treats semi-structured text the same way as unstructured text in the Open IE task. The system filters out most of HTML tags and forms a binary relation candidate tuple (E1, RelText, E2); the system then extracts a binary relation tuple (E1, R, E2) if a relation R exists between the two given entities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rGALA system adopts a Ripple-Down Rules (RDR)' incremental knowledge acquisition approach; in RDR, the rule creation process is simple and rapid with ensured consistency in ruleset maintenance. The system does not require labelled training data or up-front knowledge for rule creation. Moreover, it supports open-ended settings on target relation definition by starting with a small set of relations and incrementally adding more relations as discovered during the extraction process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Open Information Extraction (Open IE) aims to achieve domain-independent discovery of relations from the heterogeneous Web. Existing Open IE systems can be categorised into two groups based on the level of sophistication of the NLP techniques applied: (1) shallow syntactic parsing; and (2) dependency parsing. Shallow syntactic parsing based Open IE systems annotate sentences with Part-of-Speech (POS) tags and phrase chunk tags, then identify relations by matching patterns over these tags. The systems in this category include TextRunner (Banko et al., 2007) , WOEpos (Wu and Weld, 2010) , ReVerb and R2A2 . Dependency parsing based Open IE systems utilise a dependency parser to identify whole subtrees connecting the relation predicate and its arguments. The systems in this category include OLLIE (Mausam et al., 2012) , ClausIE (Corro and Gemulla, 2013), Wanderlust (Akbik and Brob, 2009) , WOEparse (Wu and Weld, 2010) and KrakeN (Akbik and Loser, 2012) . Each of these systems makes use of various heuristics to obtain extractions from the dependency parses. They are generally more time consuming than the shallow parsing based systems. They trade efficiency for improved precision and recall.", |
| "cite_spans": [ |
| { |
| "start": 542, |
| "end": 562, |
| "text": "(Banko et al., 2007)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 572, |
| "end": 591, |
| "text": "(Wu and Weld, 2010)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 804, |
| "end": 825, |
| "text": "(Mausam et al., 2012)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 874, |
| "end": 896, |
| "text": "(Akbik and Brob, 2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 908, |
| "end": 927, |
| "text": "(Wu and Weld, 2010)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 939, |
| "end": 962, |
| "text": "(Akbik and Loser, 2012)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Open Information Extraction (Open IE)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The basic idea of RDR (Compton and Jansen, 1990 ) is that each case is processed by the system and when the outcome is incorrect or NULL, one or more rules are added to provide the correct outcome for that case. The system also stores cornerstone cases, cases which triggered the creation of new rules.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 47, |
| "text": "(Compton and Jansen, 1990", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ripple-Down Rules (RDR)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The RDR approach has been applied to a range of NLP applications. Pham and colleagues developed KAFTIE using the RDR approach to extract positive attributions from scientific papers (Pham and Hoffmann, 2004) and to extract temporal relations (Pham and Hoffmann, 2006) . KAFTIE was noted to have outperformed machine learning based systems. The RDR Case Explore (RDRCE) system (Xu and Hoffmann, 2010) combined RDR with a Machine Learning method. RDRCE was applied for POS tagging task and achieved a slight improvement over a state-of-the-art POS tagging system after 60 hours of knowledge engineering. A hybrid RDR-based Open IE system (Kim and Compton, 2012) makes use of RDR's incremental knowledge acquisition technique as an add-on to the state-of-the-art ReVerb Open IE system. With this wrapper approach, the ReVerb system's performance is further improved using RDR's error correction for the domain of interest.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 207, |
| "text": "(Pham and Hoffmann, 2004)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 242, |
| "end": 267, |
| "text": "(Pham and Hoffmann, 2006)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 376, |
| "end": 399, |
| "text": "(Xu and Hoffmann, 2010)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ripple-Down Rules (RDR)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Early IE systems for semi-structured text have been studied largely with manual approaches (Hammer et al., 1997; Arocena and Mendelzon, 1999) and supervised approaches (Kushmerick, 1997; Hsu and Dung, 1998; Soderland, 1999; Muslea et al., 1999; Califf and Mooney, 1999; Freitag, 2000; Laender et al., 2002) . In order to increase the level of automation and reduce manual efforts, most of recent work has focused on semi-supervised approaches (Chang and Lui, 2001; Chang and Kuo, 2004) and unsupervised approaches (Crescenzi et al., 2001; Arasu and Garcia-Molina, 2003; Zhai and Liu, 2005; Liu et al., 2010; Grigalis, 2013) . Semi-supervised and unsupervised IE systems can be applied only to template based Web pages as they depend heavily on the existence of a common template (Chang et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 112, |
| "text": "(Hammer et al., 1997;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 113, |
| "end": 141, |
| "text": "Arocena and Mendelzon, 1999)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 168, |
| "end": 186, |
| "text": "(Kushmerick, 1997;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 187, |
| "end": 206, |
| "text": "Hsu and Dung, 1998;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 207, |
| "end": 223, |
| "text": "Soderland, 1999;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 224, |
| "end": 244, |
| "text": "Muslea et al., 1999;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 245, |
| "end": 269, |
| "text": "Califf and Mooney, 1999;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 270, |
| "end": 284, |
| "text": "Freitag, 2000;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 285, |
| "end": 306, |
| "text": "Laender et al., 2002)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 443, |
| "end": 464, |
| "text": "(Chang and Lui, 2001;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 465, |
| "end": 485, |
| "text": "Chang and Kuo, 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 514, |
| "end": 538, |
| "text": "(Crescenzi et al., 2001;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 539, |
| "end": 569, |
| "text": "Arasu and Garcia-Molina, 2003;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 570, |
| "end": 589, |
| "text": "Zhai and Liu, 2005;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 590, |
| "end": 607, |
| "text": "Liu et al., 2010;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 608, |
| "end": 623, |
| "text": "Grigalis, 2013)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 779, |
| "end": 799, |
| "text": "(Chang et al., 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IE systems for Semi-structured Text", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In the same manner as the rGALA system, WHISK (Soderland, 1999 ) also aims to extract information from both semi-structured and unstructured text; but unlike rGALA, it targets specific domain Web pages and uses a supervised learning algorithm. To reduce the amount of manual labelling, WHISK interleaves learning new rules and annotating new instances (training examples) using selective sampling; thus, the learning and annotation process is iterative. It begins with an empty set of rules and at each iteration: (1) it presents to the user a batch of instances to be labelled via a graphical interface; (2) the labelled instances are added to a training set; (3) for each instance in a training set (not covered by the existing ruleset), WHISK learns the new rule using top-down induction, i.e., it finds the most general rule that covers the seed, then specialises the rule by adding terms incrementally until a stopping condition is met and finally (4) it prunes the rules.", |
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 62, |
| "text": "(Soderland, 1999", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IE systems for Semi-structured Text", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The rGALA system consists of the following three main components: (1) Preprocessor, (2) Tuple Extractor, and (3) RDR Engine.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(1) Preprocessor consists of the following four tools:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(a) Web transformer A simple HTML transformation tool was built using JSOUP 1 to extract both semistructured and unstructured text. To keep all potential information while minimising the amount of Web noise, the Web transformer tool conducts the following two steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Step1: removes most of HTML tags and attributes except <table>, <list> and <p> tags.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Step2: extracts text within <p> tags. (b) Text segmenter A text segmenter was built using the JFlex 2 (fast lexical analyser generator for Java) parser", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "to identify text segments from both semistructured and unstructured text. It takes a specification with a set of regular expressions and corresponding actions to identify a whole block of text for semi-structured text and a sentence for unstructured text. BETWEEN, ENTITY_2] , which become RDR cases for binary relation classification task in the RDR Engine. A candidate tuple consists of two entities (ENTITY_1 and ENTI-TY_2) and a relational text (BETWEEN), which includes all the words between the two entities. The maximum number of tokens in the relational text is not limited by default but this value is configurable.", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 264, |
| "text": "BETWEEN,", |
| "ref_id": null |
| }, |
| { |
| "start": 265, |
| "end": 274, |
| "text": "ENTITY_2]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(3) RDR Engine follows these three steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(a)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Step 1: The user checks the Relation Extraction (RE) result returned from the system. For each RDR case, the system can return a correct or an incorrect RE result, or a NULL result when no rule was fired for the given case. (b)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Step 2: The user creates an RDR rule when the result returned is not correct or NULL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "If the system returns an incorrect RE result, a new rule is created under the rule which returned the incorrect result. If the system returns a NULL result, a new rule is created under the root rule. (c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Step 3: The system evaluates the newly created RDR rule and the user refines it when required. For the newly created rule, the system automatically evaluates it against the relevant cases (the parent rule's cases and the sibling rules' cases) in the system, which may conflict with the new rule. If the rule conflicts with these cases, the user can refine the rule's condition to make the rule more precise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "An RDR rule has one or more conditions connected with an 'AND' operation, and a conclusion. Figure 1 shows the components of the RDR rule condition and conclusion. Note that 'Cond' and 'Conc' refer to 'Condition' and 'Conclusion' respectively. \u2022 Semantic feature: Named Entity (NE) (c) Currently the rGALA system supports nine OPERATORs including '==', '!=', 'contains', '!contains', 'regEx', 'startsWith', 'hasWordIn', 'Pattern' and 'NULL'. Especially, the operator 'regEx', 'hasWordIn' and 'Pattern' assist a single rule to handle multiple cases with similar patterns and words. (d) VALUE is usually derived automatically in the system's GUI based on the choice made for the ATTRIBUTE_1 and ATTRIBUTE_2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 92, |
| "end": 100, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RDR Rule Description", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "(2) A conclusion contains the relation extraction result in the form of (ENTITY1, RELATION, ENTITY2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RDR Rule Description", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "A Multiple Classification RDR (MCRDR) is an 'n-ary' tree structure with only except edges. A case is evaluated by passing it to the root rule, which is always satisfied. An MCRDR evaluates all the first level rules which are direct children of the root rule. When a rule is satisfied, all its corresponding children rules are tested recursively where the children rules' conclusions overwrite the parent rule's conclusion. The inference process stops when there are no more children rules to evaluate. The rGALA system applies an MCRDR for the single classification task as it is proved to be more efficient than a Single Classification RDR (SCRDR) even for single classification task (Kang et al., 1995) . Figure 2 demonstrates MCRDR ruleset construction starting with an empty ruleset and the following three RDR cases described in the examples below. Example 1. From the below text segment 1, RDR case 1 is identified. As the system returns a NULL result, a new rule is created. In RDR's exception rule structure, a user needs to select only a few conditions which are enough to distinguish the current case from the cornerstone case of the parent rule. Figure 3 presents the RDR Engine GUI of the rGALA system. The GUI allows a user to view each RDR case and the system's classification results, and to form a rule when required. The numbers in figure 3 describes the followings:", |
| "cite_spans": [ |
| { |
| "start": 685, |
| "end": 704, |
| "text": "(Kang et al., 1995)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 707, |
| "end": 715, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 1157, |
| "end": 1165, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rule Construction Example in Multiple Classification RDR (MCRDR)", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "1. Displays a text segment; ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "rGALA Graphic User Interface (GUI)", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The experiments were conducted to demonstrate the efficacy of the rGALA system in creating rules and the effectiveness of its ruleset on both semistructured and unstructured text in open domain Web pages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In order to examine the efficacy of the rGALA system on open domain Web pages, a set of Web pages was collected from various educational institutions (e.g. '.edu'), commercial companies (e.g. '.com') and government organisations (e.g. '.gov') web sites based on their URL addresses (without domain specific keywords). Manual annotation of a gold standard data is the very time consuming process. Therefore, from 1351 collected Web pages, only two sets of 100 Web pages were randomly selected as training and testing datasets without duplication.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Settings", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Five types of relations about contact information including 'hasPhone', 'hasFax', 'hasAddress', 'hasEmail' and 'hasDomainAddress' were chosen as initial target relations because: (1) they are commonly observed information in both semi-structured and unstructured text in open domain Web pages; and (2) they are usually written in heterogeneous formats influenced by personal, organisational and cultural preferences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Settings", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In the experiments, these five types of relations were further categorised into ten target relations. For example, the 'hasPhone' relation was further specified into two relations 'O_hasPhone' and 'P_hasPhone' to capture the different entity types (organisation and person). Note that partially matched entities (esp. phone numbers) were evaluated as correct extractions in these experiments. For example, for '+61 2 9999 5444/9999 1111', the extracted entity '9999 1111' instead of '+61 2 9999 1111' was counted as a correct extraction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Settings", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "This section presents how the initial RDR ruleset was created to handle instances of the target relations from an empty ruleset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initial RDR Ruleset Construction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To create the gold standard data, we manually analysed the post-processed 100 Web pages in the training dataset. As shown in the Table 1 , a total of 325 instances of target relations were identified; 320 instances were found from semi-structured text written in 61 patterns and 5 instances were found from unstructured text written in 5 patterns.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 129, |
| "end": 136, |
| "text": "Table 1", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Initial RDR Ruleset Construction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To build the initial RDR ruleset, the rGALA system processed the 100 Web pages in the training dataset. It identified 1396 text segments and 5770 candidate tuples (RDR cases). In the tuple extraction process, 7 NE types were identified as entities' of interest including person, organisation, location, phone number, fax number, email address, postal address, and domain address.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initial RDR Ruleset Construction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The 5770 candidate tuples include 318 instances of the target relations and missed 7 target relation instances due to errors in detecting some phone number formatting. For example, for 'Phone: (618) 322 4444/5555', the current system can detect '(618) 322 4444' but it cannot detect the last shortened phone number '5555'. Table 1 shows the number of RDR rules created for each type of target relations. In total, 22 rules were created; 21 rules were created to cover 67 patterns and 1 rule was created to reshape an overly generalised rule causing a false positive error. On average, for semi-structured text, one rule covered three or more patterns.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 323, |
| "end": 330, |
| "text": "Table 1", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Initial RDR Ruleset Construction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Some RDR rules created for unstructured text also handled semi-structured text, and vice versa. These are indicated using bold numbers in Table 1 . For example, no rule was required to handle the 17 instances of 'P_hasPhone' relation from semi-structured text as it was covered by one rule created from one instance in unstructured text. This arises because the rGALA system handles semi-structured text the same way as unstructured text; it filters out most of HTML tags and identifies candidate tuples in the form of (E1, RelText, E2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 138, |
| "end": 145, |
| "text": "Table 1", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Initial RDR Ruleset Construction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The knowledge engineering of 5770 candidate tuples (RDR cases) took about 7 hours without any extra-preparation time for labelling data or understanding the data structure in advance. The initial RDR ruleset construction time starts when a case is called and finishes when a rule is accepted as complete. This construction time is logged automatically. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initial RDR Ruleset Construction", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "This section presents the performance of the rGALA system on the 100 Web pages in the testing dataset with the RDR ruleset constructed from the training dataset. To create the gold standard data, the postprocessed 100 Web pages in the testing dataset were also manually analysed. As shown in Table 2 , a total of 141 instances of the target relations were identified; 137 instances were found from semi-structured text written in 26 patterns and 4 instances were found from unstructured text written in 4 patterns. Among the 26 patterns from semi-structured text, 10 patterns were the same as the patterns from semi-structured text in the training dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 292, |
| "end": 300, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "rGALA System Performance", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As shown in Table 2 , the testing dataset only included four target relations out of our ten target relations including 'O_hasPhone', 'O_hasFax', 'O_hasAddress' and 'O_hasEmail' due to the random selection of testing data. The testing dataset contained four out of five types of relations about contact information including 'hasPhone ', 'hasFax', 'hasAddress' and 'hasEmail'. When processing the testing dataset, the rGALA system identified 1386 text segments and 2818 candidate tuples (RDR cases). Overall, the rGALA system achieved reasonable and balanced performance of 0.88 F1 score with 0.93 precision and 0.83 recall. Total of 24 errors occurred including 4 False Positive (FP) errors and 20 False Negative errors (FN). All the 24 errors were caused from NE errors in the preprocessing phase; the 4 FP errors were due to incorrect NE types and the 20 FN errors were due to missed NEs. Among the 20 FN errors, the 8 FN errors were from missing shortened phone numbers format and 12 FN errors were from missing person and organisation named entities.", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 376, |
| "text": "', 'hasFax', 'hasAddress' and 'hasEmail'.", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "rGALA System Performance", |
| "sec_num": "4.3" |
| }, |
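The reported scores are internally consistent: F1 is the harmonic mean of precision and recall, so 0.93 precision and 0.83 recall yield 0.88. A minimal check (Python; the function name is ours, not part of rGALA):

```python
# Sanity-check of the reported scores: F1 is the harmonic mean of
# precision and recall (values taken from the text above).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

score = f1(0.93, 0.83)
print(round(score, 2))  # prints 0.88
```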
| { |
| "text": "As mentioned in section 4, the rGALA system achieved reasonable performance of 0.88 F1 score (with 0.93 precision and 0.83 recall) after only 7 hours of knowledge engineering on 100 open domain Web pages. No extra time was spent in analyzing the data, validating the rules or debugging.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In our experiment, the training dataset by chance contained more examples and patterns than the testing dataset. If the testing dataset were to contain more examples and patterns, the system may degrade. However, the rGALA system can quickly handle those uncovered examples in the testing dataset by adding rules incrementally.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The rGALA system cleans out HTML tags and treats semi-structured text in the same way as unstructured text. This approach brings out two main advantages shown in Table 1: (1) the rGALA system can handle various patterns of semi-structured text without any prior knowledge of the data structure/format and (2) its RDR rules work on both semi-structured and unstructured text. It is usually difficult to perfectly extract information from open domain Web pages in one go. Subsequent maintenance and evolution of the ruleset is of utmost importance. In the rGALA system, a new rule is automatically organised in an exception structure, with automatic checking for any potential conflicts. This effectively addresses the critical maintenance issue from which most manual approaches suffer.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 162, |
| "end": 170, |
| "text": "Table 1:", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
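The exception structure described above can be sketched in a few lines. This is an illustrative single-classification RDR sketch under assumed rule and case formats, not the rGALA implementation: each rule carries exception rules that are consulted only when the parent rule fires, so a correction is added locally without editing existing rules.

```python
# Minimal Ripple-Down Rules sketch (illustrative; not the rGALA rule
# language). A rule has a condition, a conclusion, and exception rules
# that are consulted only when the parent rule fires.
class Rule:
    def __init__(self, condition, conclusion):
        self.condition = condition      # predicate over a case (a dict)
        self.conclusion = conclusion
        self.exceptions = []            # refinements added case-by-case

    def evaluate(self, case):
        if not self.condition(case):
            return None
        # A firing exception overrides the parent's conclusion.
        for exc in self.exceptions:
            verdict = exc.evaluate(case)
            if verdict is not None:
                return verdict
        return self.conclusion

# A rule for number segments, corrected for fax numbers by adding an
# exception rather than rewriting the original rule.
phone = Rule(lambda c: c["entity_type"] == "NUMBER", "hasPhone")
phone.exceptions.append(
    Rule(lambda c: "fax" in c["context"].lower(), "hasFax"))

print(phone.evaluate({"entity_type": "NUMBER", "context": "Fax: 9385 1234"}))  # hasFax
print(phone.evaluate({"entity_type": "NUMBER", "context": "Tel: 9385 1234"}))  # hasPhone
```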
| { |
| "text": "Although the size of the experimental dataset was not large, it fully satisfied our initial scenario where IE is required from a collection of open domain Web pages without prior knowledge of the data. Experience suggests that knowledge acquisition with RDR remains very simple and rapid even for large rulesets with over 10,000 rules (Compton et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 357, |
| "text": "(Compton et al., 2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As mentioned in section 2.3, WHISK (Soderland, 1999) also aimed for information extraction from both semi-structured and unstructured text. While the rGALA system builds one ruleset which works for both semistructured and unstructured text for open domain sources, WHISK builds separate rulesets for semi-structured and unstructured text; it requires specific inputs for different domains such as the exact phrase delimiters to be extracted from semi-structured text.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 52, |
| "text": "(Soderland, 1999)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The rGALA system is simple but effective; its case-by-case incremental knowledge acquisition approach helps to efficiently capture human knowledge to handle heterogeneous formats of semi-structured text in the open domain Web without prior knowledge, a labelled dataset or pre-defined relation schema. Rules can be updated as errors are uncovered, or when new formats are discovered, or new target relations are defined. The rGALA system is not a system to extract all potential relations from the whole Web, but it is a system to extract any relations of interests from any given Web pages. To date no work has been published on IE from semi-structured text for open domain Web pages. We have demonstrated that treating semi-structured text the same way as unstructured text for this problem shows considerable promise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://jsoup.org/ 2 http://jflex.de/ -The JFlex parser uses Deterministic Finite Automata (DFA) to segment a text stream based on a set of user-defined rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
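As a rough analogue of the rule-based segmentation the JFlex footnote describes, user-defined patterns can be compiled into one scanner that splits a text stream into typed segments. The patterns below are illustrative assumptions, not the actual rules used by rGALA:

```python
import re

# Rough analogue of rule-based stream segmentation: each (name, pattern)
# pair plays the role of a user-defined lexer rule. Patterns are
# illustrative only.
RULES = [
    ("EMAIL", r"[\w.+-]+@[\w-]+\.[\w.]+"),
    ("PHONE", r"\(?\d{2,4}\)?[ -]?\d{3,4}[ -]?\d{3,4}"),
    ("WORD",  r"[A-Za-z]+"),
]
SCANNER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in RULES))

def segment(text):
    """Return (rule_name, matched_text) pairs in stream order."""
    return [(m.lastgroup, m.group()) for m in SCANNER.finditer(text)]

print(segment("Email info@example.org or call 02 9385 1234"))
```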
| { |
| "text": "The gALA system was developed by Defence Science and Technology (DST) group.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Wanderlust: Extracting semantic relations from natural language text using dependency grammar patterns", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Akbik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jugen", |
| "middle": [], |
| "last": "Brob", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 18th International conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "6--15", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Akbik and Jugen Brob. 2009. Wanderlust: Extracting semantic relations from natural language text using dependency grammar patterns. In Proceedings of the 18th International conference on World Wide Web, pages 6-15.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Kraken: N-ary facts in open information extraction", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Akbik", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Loser", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction", |
| "volume": "", |
| "issue": "", |
| "pages": "52--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Akbik and Alexander Loser. 2012. Kraken: N-ary facts in open information extraction. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 52-56.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Extracting structured data from web pages", |
| "authors": [ |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Arasu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hector", |
| "middle": [], |
| "last": "Garcia-Molina", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data", |
| "volume": "", |
| "issue": "", |
| "pages": "337--348", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arvind Arasu and Hector Garcia-Molina. 2003. Extracting structured data from web pages. In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, pages 337-348.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "WebOQL: Restructuring documents, databases and Webs", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "O" |
| ], |
| "last": "Arocena", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [ |
| "O" |
| ], |
| "last": "Mendelzon", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Theory and Practice of Object Systems", |
| "volume": "5", |
| "issue": "3", |
| "pages": "127--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo O. Arocena and Alberto O. Mendelzon. 1999. WebOQL: Restructuring documents, databases and Webs. Theory and Practice of Object Systems, 5(3): 127-141.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Open information extraction from the web", |
| "authors": [ |
| { |
| "first": "Michele", |
| "middle": [], |
| "last": "Banko", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "J" |
| ], |
| "last": "Cafarella", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Broadhead", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2670--2676", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2670-2676.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Relational Learning of Pattern-Match Rules for Information Extraction", |
| "authors": [ |
| { |
| "first": "Mary", |
| "middle": [ |
| "Elaine" |
| ], |
| "last": "Califf", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 16th National Conference on Artificial Intelligence and 11th Conference on Innovative Applications of Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "328--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mary Elaine Califf and Raymond J. Mooney. 1999. Relational Learning of Pattern-Match Rules for Information Extraction. In Proceedings of the 16th National Conference on Artificial Intelligence and 11th Conference on Innovative Applications of Artificial Intelligence. pages 328-334.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "OLERA: Semisupervised Web-data extraction with visual support", |
| "authors": [ |
| { |
| "first": "Chia-Hui", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shih-Chien", |
| "middle": [], |
| "last": "Kuo", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "IEEE Intelligent Systems", |
| "volume": "19", |
| "issue": "6", |
| "pages": "56--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-Hui Chang and Shih-Chien Kuo. 2004. OLERA: Semisupervised Web-data extraction with visual support. IEEE Intelligent Systems, 19(6): 56-64.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A Survey of Web Information Extractio Systems", |
| "authors": [ |
| { |
| "first": "Chia-Hui", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kayed", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "R" |
| ], |
| "last": "Girgis", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "F" |
| ], |
| "last": "Shaalan", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "IEEE Transactiona on Knowledge and Data Engineering", |
| "volume": "18", |
| "issue": "10", |
| "pages": "1411--1428", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-Hui Chang, M. Kayed, M.R. Girgis and K.F. Shaalan. 2006. A Survey of Web Information Extractio Systems. IEEE Transactiona on Knowledge and Data Engineering, 18(10): 1411- 1428.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "IEPAD: informatio extraction based on pattern discovery", |
| "authors": [ |
| { |
| "first": "Chia-Hui", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shao-Chen", |
| "middle": [], |
| "last": "Lui", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th International World Wide Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "681--688", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-Hui Chang and Shao-Chen Lui. 2001. IEPAD: informatio extraction based on pattern discovery. In Proceedings of the 10th International World Wide Web Conference, pages 681-688.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A philosophical basis for knowledge acquisition", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Compton", |
| "suffix": "" |
| }, |
| { |
| "first": "Bob", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Knowledge Acquisition", |
| "volume": "2", |
| "issue": "3", |
| "pages": "241--258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Compton and Bob Jansen. 1990. A philosophical basis for knowledge acquisition. Knowledge Acquisition, 2(3): 241-258.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Experience with longterm knowledge acquisition", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Compton", |
| "suffix": "" |
| }, |
| { |
| "first": "Lindsay", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Lavers", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang-Sok", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 6th international conference on Knowledge capture", |
| "volume": "", |
| "issue": "", |
| "pages": "49--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Compton, Lindsay Peters, Timothy Lavers, Yang-Sok Kim. 2011. Experience with long- term knowledge acquisition. In Proceedings of the 6th international conference on Knowledge capture, pages 49-56.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "ClausIE: Clause-Based Open Information Extraction", |
| "authors": [ |
| { |
| "first": "Luciano", |
| "middle": [], |
| "last": "Del Corro", |
| "suffix": "" |
| }, |
| { |
| "first": "Rainer", |
| "middle": [], |
| "last": "Gemulla", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 22nd International conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "355--366", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-Based Open Information Extraction. In Proceedings of the 22nd International conference on World Wide Web, pages 355-366.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "RoadRunner: Towards Automatic Data Extraction from Large Web Sites", |
| "authors": [ |
| { |
| "first": "Giansalvatore", |
| "middle": [], |
| "last": "Valter Crescenzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Mecca", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Merialdo", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 27th International Conference on Very Large Data Basaes", |
| "volume": "", |
| "issue": "", |
| "pages": "109--118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valter Crescenzi, Giansalvatore Mecca and Paolo Merialdo. 2001. RoadRunner: Towards Automatic Data Extraction from Large Web Sites. In Proceedings of the 27th International Conference on Very Large Data Basaes, pages 109-118.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Open information extraction: The second generation", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Janara", |
| "middle": [], |
| "last": "Christensen", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Mausam", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "3--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In Proceedings of the Conference on Artificial Intelligence, pages 3-10.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Identifying relations for open information extraction", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1535--1545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Stephen Soderland and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535-1545.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Machine Learning for Information Extraction in Informal Domains", |
| "authors": [ |
| { |
| "first": "Dayne", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Machine Learning", |
| "volume": "39", |
| "issue": "", |
| "pages": "169--202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dayne Freitag. 2000. Machine Learning for Information Extraction in Informal Domains. Machine Learning, 39(2-3): 169-202.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Towards web-scale structured web data extraction", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Grigalis", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 6th ACM International Conference on Web search and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "753--758", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Grigalis, Towards web-scale structured web data extraction. 2013. In Proceedings of the 6th ACM International Conference on Web search and data mining, pages 753-758.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Semistructured Data: The TSIMMIS Experience", |
| "authors": [ |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Hammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Mchugh", |
| "suffix": "" |
| }, |
| { |
| "first": "Hector", |
| "middle": [], |
| "last": "Garcia-Molina", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 1st East-European conference on Advances in Databases and Information Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "22--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joachim Hammer, Jason McHugh and Hector Garcia-Molina. 1997. Semistructured Data: The TSIMMIS Experience. In Proceedings of the 1st East-European conference on Advances in Databases and Information Systems, pages 22- 22.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Generating finite-state transducers for semistructured data extraction from the Web", |
| "authors": [ |
| { |
| "first": "Chun-Nan", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Tzung", |
| "middle": [], |
| "last": "Dung", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Information systems", |
| "volume": "23", |
| "issue": "8", |
| "pages": "521--538", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chun-Nan Hsu and Ming-Tzung Dung. 1998. Generating finite-state transducers for semi- structured data extraction from the Web. Information systems, 23(8): 521-538.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Multiple classification ripple down rules: evaluation and possibilities", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Byeong Ho Kang", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Compton", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Preston", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 9th Banff Knowledge Acquisition for Knowledge Based Systems Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Byeong Ho Kang, Paul Compton and Phil Preston. 1995. Multiple classification ripple down rules: evaluation and possibilities. In Proceedings of the 9th Banff Knowledge Acquisition for Knowledge Based Systems Workshop.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Improving Open Information Extraction for Informal Web Documents with Ripple-Down Rules", |
| "authors": [ |
| { |
| "first": "Myung", |
| "middle": [ |
| "Hee" |
| ], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Compton", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 12th Pacific Rim conference on Knowledge Management and Acquisition for Intelligent Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "160--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Myung Hee Kim and Paul Compton. Improving Open Information Extraction for Informal Web Documents with Ripple-Down Rules. 2012. In Proceedings of the 12th Pacific Rim conference on Knowledge Management and Acquisition for Intelligent Systems, pages 160-174.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Wrapper induction for information extraction", |
| "authors": [ |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Kushmerick", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicholas Kushmerick. 1997. Wrapper induction for information extraction. PhD dissertation, University of Washington.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "DEByE -Data Extraction By Example", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "F" |
| ], |
| "last": "Alberto", |
| "suffix": "" |
| }, |
| { |
| "first": "Berthier", |
| "middle": [], |
| "last": "Laender", |
| "suffix": "" |
| }, |
| { |
| "first": "Altigran", |
| "middle": [ |
| "S" |
| ], |
| "last": "Ribeiro-Neto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Da Silva", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Data and Knowledge Engineering", |
| "volume": "40", |
| "issue": "2", |
| "pages": "121--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alberto H.F. Laender, Berthier Ribeiro-Neto and Altigran S. da Silva. 2002. DEByE -Data Extraction By Example. Data and Knowledge Engineering, 40(2): 121-154.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Vide: A vision-based approach for deep web data extraction", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "" |
| }, |
| { |
| "first": "Weiyi", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "IEEE Transactions on Knowledge and Data Engineering", |
| "volume": "22", |
| "issue": "3", |
| "pages": "447--460", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Liu, Xiaofeng Meng and Weiyi Meng. 2010. Vide: A vision-based approach for deep web data extraction. IEEE Transactions on Knowledge and Data Engineering, 22 (3), pages 447-460.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Open Language Learning for Information Extraction", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mausam", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Schmitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Bart", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Langauge Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "523--534", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland and Oren Etzioni. 2012. Open Language Learning for Information Extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Langauge Learning, pages 523-534.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A hierarchical approach to wrapper induction", |
| "authors": [ |
| { |
| "first": "Ion", |
| "middle": [], |
| "last": "Muslea", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Minton", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Knoblock", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 3rd Annual Conference on Autonomous Agents", |
| "volume": "", |
| "issue": "", |
| "pages": "190--197", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ion Muslea, Steve Minton and Craig Knoblock. 1999. A hierarchical approach to wrapper induction. In Proceedings of the 3rd Annual Conference on Autonomous Agents. pages 190- 197.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Extracting Positive Attributions from Scientific papers", |
| "authors": [ |
| { |
| "first": "Bao", |
| "middle": [], |
| "last": "Son", |
| "suffix": "" |
| }, |
| { |
| "first": "Achim", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 7th International conference on Discovery Science Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "169--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Son Bao Pham and Achim Hoffmann. 2004. Extracting Positive Attributions from Scientific papers. In Proceedings of the 7th International conference on Discovery Science Conference, pages 169-182.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Efficient Knowledge Acquisition for Extracting Temporal Relations", |
| "authors": [ |
| { |
| "first": "Bao", |
| "middle": [], |
| "last": "Son", |
| "suffix": "" |
| }, |
| { |
| "first": "Achim", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 17th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Son Bao Pham and Achim Hoffmann. 2006. Efficient Knowledge Acquisition for Extracting Temporal Relations. In Proceedings of the 17th", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "European Conference on Artificial Intelligence", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "521--525", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "European Conference on Artificial Intelligence, pages 521-525.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Learning Information Extraction Rules for Semi-structured and Free Text", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Soderland. 1999. Learning Information Extraction Rules for Semi-structured and Free Text. Machine Learning, pages 1-44.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Open information extraction using Wikipedia", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Daniel", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "118--127", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118-127.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "RDRCE: Combining Machine Learning and Knowledge Acquisition", |
| "authors": [ |
| { |
| "first": "Han", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Achim", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 11th International Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "165--179", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han Xu and Achim Hoffmann. 2010. RDRCE: Combining Machine Learning and Knowledge Acquisition. In Proceedings of the 11th International Workshop, pages 165-179.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Web data extraction based on partial tree alignment", |
| "authors": [ |
| { |
| "first": "Yanhong", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 14th International Conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "76--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yanhong Zhai and Bing Liu. 2005. Web data extraction based on partial tree alignment. In Proceedings of the 14th International Conference on World Wide Web. pages 76-85.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Components of RDR rule(1) A condition consists of four components in the form of (ATTRIBUTE_1.ATTRIBUTE_2 OP-ERATOR VALUE).(a) ATTRIBUTE _1 refers to one of the 5 sections of a given text segment which is in the form of [E1BEFORE, ENTITY_1, sections contain all the remaining tokens before the ENTITY_1 and after the ENTITY_2 sections, respectively. (b) ATTRIBUTE _2 refers to one of the NLP features; currently the following three NLP features are available:\u2022 Lexical feature: token (TKN)\u2022 Syntactic feature: Part-Of-Speech (POS)" |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Rule construction example in MCRDR" |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "RDR Engine GUI for case-by-case incremental Knowledge Acquisition" |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "text": "", |
| "html": null, |
| "content": "<table><tr><td>: rGALA rule creation analysis on the training</td></tr><tr><td>Dataset. ('Ins' and 'Pat' refers to 'Instances' and 'Pat-</td></tr><tr><td>terns' respectively )</td></tr></table>" |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "text": "", |
| "html": null, |
| "content": "<table><tr><td>: The rGALA performance on the testing</td></tr><tr><td>dataset. ('Ins' refers to 'Instances')</td></tr></table>" |
| } |
| } |
| } |
| } |