| { |
| "paper_id": "Y16-2026", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:47:09.725615Z" |
| }, |
| "title": "Toward the automatic extraction of knowledge of usable goods", |
| "authors": [ |
| { |
| "first": "Mei", |
| "middle": [], |
| "last": "Uemura", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Tohoku University", |
| "location": { |
| "country": "Japan" |
| } |
| }, |
| "email": "mei.uemura@ecei.tohoku.ac.jp" |
| }, |
| { |
| "first": "Naho", |
| "middle": [], |
| "last": "Orita", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Tohoku University", |
| "location": { |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Naoaki", |
| "middle": [], |
| "last": "Okazaki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Tohoku University", |
| "location": { |
| "country": "Japan" |
| } |
| }, |
| "email": "okazaki@ecei.tohoku.ac.jp" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Tohoku University", |
| "location": { |
| "country": "Japan" |
| } |
| }, |
| "email": "inui@ecei.tohoku.ac.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Knowledge of usable goods (e.g., toothbrush is used to clean the teeth and treadmill is used for exercise) is ubiquitous and in constant demand. This study proposes semantic labels to capture aspects of knowledge of usable goods and builds a benchmark corpus, Usable Goods Corpus, to explore this new semantic labeling task. Our human annotation experiment shows that human annotators can generally identify pieces of information of usable goods in text. Our first attempt toward the automatic identification of such knowledge shows that a model using conditional random fields approaches the human annotation (F score 73.2%). These results together suggest future directions to build a large-scale corpus and improve the automatic identification of knowledge of usable goods.", |
| "pdf_parse": { |
| "paper_id": "Y16-2026", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Knowledge of usable goods (e.g., toothbrush is used to clean the teeth and treadmill is used for exercise) is ubiquitous and in constant demand. This study proposes semantic labels to capture aspects of knowledge of usable goods and builds a benchmark corpus, Usable Goods Corpus, to explore this new semantic labeling task. Our human annotation experiment shows that human annotators can generally identify pieces of information of usable goods in text. Our first attempt toward the automatic identification of such knowledge shows that a model using conditional random fields approaches the human annotation (F score 73.2%). These results together suggest future directions to build a large-scale corpus and improve the automatic identification of knowledge of usable goods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "A rich body of information extraction techniques focuses on acquiring knowledge from a huge amount of text data (Nickel et al. 2016) . This allows large-scale knowledge bases to cover a broad range of knowledge. However, an important subfield of knowledge is not fully addressed: knowledge about the use of objects, e.g., that hand sanitizer is used to kill bacteria and dental floss is used to remove plaque. Every object that humans create has its own purpose and function. We call these pieces of information knowledge of usable goods. Knowledge of usable goods is ubiquitous and in constant demand. People use search engines to find information on the effects of using a new product, the proper way to use it, and so on.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 132, |
| "text": "(Nickel et al. 2016)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Knowledge sources that contain such information would also be beneficial for various kinds of natural language processing tasks, such as question answering systems and textual entailment. However, knowledge of usable goods is not thoroughly covered by current knowledge bases because these resources focus on entities (e.g. person or organization) and their relations (e.g. Is-PresidentOf) . Section 4.3 shows the gap between the kinds of knowledge available in current knowledge bases and those that we aim to acquire.", |
| "cite_spans": [ |
| { |
| "start": 374, |
| "end": 389, |
| "text": "Is-PresidentOf)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To fill this gap, this study proposes a set of semantic labels to capture knowledge of usable goods and builds a benchmark corpus, Usable Goods Corpus, to explore the automatic extraction of such knowledge. This work begins by focusing on information about health care and household goods such as air freshener, rice cooker, and nasal strip.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We assume that one of the most important aspects of knowledge of usable goods is about effects caused by using/consuming them as in (1). 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) a. Fish-oils ... are known to reduce inflammation in the body, ...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "b. Alcohol-based hand sanitizers are more effective at killing microorganisms than soaps... (hand sanitizers) c. BB cream and CC cream are both tinted moisturizers ... (CC cream) d. ... the American Dental Association reports that up to 80% of plaque can be eliminated with this method. (dental floss)", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 109, |
| "text": "(hand sanitizers)", |
| "ref_id": null |
| }, |
| { |
| "start": 168, |
| "end": 178, |
| "text": "(CC cream)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(fish oil)", |
| "sec_num": null |
| }, |
| { |
| "text": "Humans can easily understand what the effects of these goods are: fish-oils reduce inflammation in the body (1a), hand sanitizers kill microorganisms (1b), BB cream tints and moisturizes skin (1c), and dental floss eliminates plaque (1d). However, the automatic extraction of such knowledge is challenging in that these effects can be expressed in various ways, such as a verb phrase (1a), gerund (1b), noun phrase (1c), and clause (1d). This poses a problem in that superficial linguistic patterns would not help identify these kinds of expressions. To gauge the difficulty of automatically acquiring these pieces of information, we conduct human annotation (Section 4) and automatic identification experiments (Section 5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(fish oil)", |
| "sec_num": null |
| }, |
| { |
| "text": "The major contributions of this work are: (i) We define a set of semantic labels to capture knowledge of usable goods, suggesting a new semantic labeling task. (ii) We experimentally build a benchmark corpus (Usable Goods Corpus) to explore the automatic extraction of knowledge of usable goods. The corpus and guidelines will be available when this paper is presented. (iii) We present our initial attempts toward the automatic extraction of such knowledge using a sequence labeling method. The results in this experiment provide measures to estimate the complexity of this task and suggest future directions to build a large-scale corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(fish oil)", |
| "sec_num": null |
| }, |
| { |
| "text": "To our knowledge, there is no resource that focuses on knowledge of usable goods. There are manually constructed and relatively accurate lexical resources such as WordNet (Miller 1995) and FrameNet (Baker et al. 1998) , but their coverage is inevitably limited and these ontologies do not contain the knowledge of our interest. Current large-scale knowledge bases focus on knowledge of entities and their relations, but their coverage of knowledge of usable goods is still sparse, as shown in Section 4.3. OpenIE systems such as TextRunner (Etzioni et al. 2008) and ReVerb extract a large number of relations such as (treadmill, burns, more calories) using lexico-syntactic patterns from massive corpora drawn from the Web. Though these systems cover a wide variety of relational expressions, they are not designed to extract information about usable goods.", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 184, |
| "text": "(Miller 1995)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 189, |
| "end": 217, |
| "text": "FrameNet (Baker et al. 1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 535, |
| "end": 556, |
| "text": "(Etzioni et al. 2008)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As for extracting information about objects, there is a body of research on the acquisition of telic and agentive roles in the context of generative lexicon theory (Pustejovsky 1991) . Pustejovsky proposes qualia structures that define prototypical aspects of a word's meaning (Pustejovsky et al. 1993 ). Of the four semantic roles in the qualia structures, the telic role describes the purpose or function of an object (e.g. read is a typical telic role for book). Computational approaches have been proposed to automatically extract expressions of this role from text (Yamada et al. 2007, Cimiano and Wenderoth 2007) , but these models tend to focus on extracting paraphrases of \"using X\" rather than expressions of the purpose or function of objects. While the telic roles cover a broader range of expressions (probably due to the unspecified definition of telicity in the original theory), our work focuses on effects caused by using/consuming objects, and thus stands as complementary to these previous studies.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 179, |
| "text": "(Pustejovsky 1991)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 272, |
| "end": 296, |
| "text": "(Pustejovsky et al. 1993", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 556, |
| "end": 588, |
| "text": "(Yamada et al. 2007, Cimiano and", |
| "ref_id": null |
| }, |
| { |
| "start": 589, |
| "end": 604, |
| "text": "Wenderoth 2007)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Information extraction research in biomedical domains concerns effects caused by using drugs, e.g., that drug X causes adverse effect Y (Gurulingappa et al. 2012) . This kind of information may overlap with what we aim to acquire, but the ontologies in these studies are domain-specific (e.g. protein interactions and adverse effects), whereas our interest is more generic.", |
| "cite_spans": [ |
| { |
| "start": 134, |
| "end": 160, |
| "text": "(Gurulingappa et al. 2012)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In summary, neither existing resources nor methods focus on knowledge of usable goods. In the next section, we propose a set of semantic labels that captures aspects of knowledge of usable goods. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To capture aspects of knowledge of usable goods, we define semantic labels as in Table 1 based on observation of 25 Wikipedia lead sections on health care and household goods. The Wikipedia lead 4 is normally a summary of an article's most important contents, and therefore may allow us to obtain rich information from a relatively small amount of data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 81, |
| "end": 88, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic labels for capturing knowledge of usable goods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As shown in (1), we assume that one of the most important aspects of knowledge of usable goods is about effects caused by the use of goods. We also observe that there are various kinds of information that express degree/certainty of effects and conditions for the occurrence of effects.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic labels for capturing knowledge of usable goods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "3 The phrase precursors of certain eicosanoids in Figure 1 is not Composed of for fish oil because this phrase merely explains the constituents of fish oil: omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are constituents of fish oil.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 50, |
| "end": 56, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic labels for capturing knowledge of usable goods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "4 It is also known as the introduction of a Wikipedia article, the section before the table of contents and the first heading. Table 1 are intended to capture these kinds of information. In addition to these semantic labels, we define a label Target for names and other expressions that refer to a usable good in the article. Names of usable goods essentially correspond to the titles of Wikipedia articles, which refer to the topic of the text. Figure 1 shows how these labels are assigned to pieces of information about fish oil.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 127, |
| "end": 134, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 441, |
| "end": 449, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic labels for capturing knowledge of usable goods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The annotation guidelines are designed to increase consistency. We define rules for segmentation in the guidelines, along with the definition and examples of each label. To capture various linguistic expressions as illustrated in (1), we do not define a particular syntactic category for each label. All labels can take any type of linguistic constituent, but function words that do not contribute to the meaning are not included in each segment to avoid inconsistency. For example, we ask annotators to mark define the eyes in Eyeliner is a cosmetic used to define the eyes as Effect (i.e., to is not included).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic labels in", |
| "sec_num": null |
| }, |
| { |
| "text": "The set of semantic labels in Table 1 constitutes a new semantic labeling task. To gauge the complexity of this task, we conduct a human annotation experiment (Section 4) and an automatic identification experiment (Section 5).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 30, |
| "end": 37, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic labels in", |
| "sec_num": null |
| }, |
| { |
| "text": "We conduct a pilot annotation experiment to measure the complexity of this task. Measures of inter-annotator agreement and a distributional analysis of the annotated data provide guidance for improving the annotation schema toward building a large-scale corpus in the future. This pilot corpus is also used for the automatic identification experiment in Section 5. The following describes our annotation experiment in detail.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We collect 200 English Wikipedia articles for annotation. Each article is about a health care or household good such as toothpaste, tea cosy, and dishwasher. We choose these items using Amazon categories and product lists. 5 All of the chosen items are expressed as common nouns. We exclude any company-specific product.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data: snippets from Wikipedia leads", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We extract the lead section of each Wikipedia article for annotation. We use at most the first 5 sentences of each lead to even out the number of sentences, ending up with 792 sentences in total from 200 lead snippets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data: snippets from Wikipedia leads", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Each annotator annotates the same 100 snippets using brat (Stenetorp et al. 2012) . Figure 1 shows an example of the annotation. In addition to these 100 snippets, one of the two annotators annotates another 100 snippets, resulting in 200 annotated snippets. We use this set of 200 annotated snippets as the gold standard dataset in the following automatic identification experiment.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 77, |
| "text": "(Stenetorp et al. 2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 80, |
| "end": 88, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data: snippets from Wikipedia leads", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Two annotators were given the guidelines and a short training on texts not included in the corpus. Their task is to annotate linguistic expressions that correspond to the semantic labels in Table 1 . We measure inter-annotator agreement in two ways: (i) strict match, where the start and end of the segments must be identical, and (ii) lenient match, where the segments need only overlap. We obtain a Kappa coefficient of 0.57 in the lenient match, suggesting moderate agreement (Landis and Koch 1977) . The F score in the strict match (micro average 36.8%) seems reasonable because we give annotators unparsed raw text to explore the range of linguistic expressions. Most segmentation disagreements occur in deciding whether to include function words (e.g. to protect skin or protect skin).", |
| "cite_spans": [ |
| { |
| "start": 513, |
| "end": 535, |
| "text": "(Landis and Koch 1977)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 190, |
| "end": 197, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In addition, there are label disagreements, accounting for 20% of the segment pairs that either partially or completely match. For example, one annotator marks hair and skin care in (2) as Effect and the other as Means of Use, where both labels seem appropriate. (2) It is used for topical applications such as hair and skin care. (egg oil)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "This kind of disagreement may reflect differences in annotators' background knowledge. Hair and skin care does not explicitly denote the effect, but people usually have the relevant knowledge such that skin care improves skin elasticity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The following (3) shows an example of disagreement between Version and Composed of.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(3) A wet wipe ... is a small moistened piece of paper or cloth ... (Wet wipe)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Paper and cloth in (3) could be the Version of a wet wipe, but they are also materials that compose a wet wipe. Both Version and Composed of are valid in this example. These examples of label disagreement suggest that single-label annotation would not be able to sufficiently capture knowledge of usable goods. Allowing multiple labels would be one direction for further improvement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We conduct a distributional analysis to examine the extent to which the proposed semantic labels capture information about usable goods. Table 3 breaks down the numbers of annotated instances by the two annotators. Effect is the most frequent label, suggesting its significance at least in the domain of health care and household goods.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 132, |
| "end": 139, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The distribution of the annotated data", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "On the other hand, there are only a few instances of Certainty of Effect, Degree of Effect, Null Effect, Part of, Location, Time, and User. This may be due to the content of the Wikipedia leads; these kinds of more precise information usually appear after the lead section. 6 We further examine the syntactic distribution of Effect instances in Table 4 . The majority of Effect instances are represented as verb phrases, and there is variation among those instances, such as darken the eyelids (kohl), minimize shininess caused by oily skin (face powder), tones the face (face powder), and reflect light at different angles (glitter), in addition to typical causal expressions such as causes anesthesia (anesthetic), prevent snoring (nasal strip), and promote oral hygiene (toothpaste). An example of a noun phrase in Table 4 suggests an interesting problem in that lacquer itself is a usable good but also denotes the effect caused by using nail polish. This kind of information structure has not been addressed in previous work on information extraction.", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 284, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 357, |
| "end": 364, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 831, |
| "end": 838, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The distribution of the annotated data", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Overall, we find that 81.8% of instances occur with Target in the same sentence. The remaining cases involve long-distance dependencies across sentences. This distribution suggests that we do not need to annotate the relation between Target and each label and that we can exploit these intra-sentential relations in the automatic identification task. Section 5 shows our automatic identification experiment using this distributional property.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The distribution of the annotated data", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The above human annotation experiment shows that Wikipedia leads contain a reasonable amount of information on effects caused by using goods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with current knowledge base", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "6 Besides these semantic labels, there are other descriptions of the manufacturing process and history of usable goods, as in (4). Table 4 (syntactic distribution of Effect instances): verb phrase (transitive), e.g. ... that can be applied to decorate and protect the nail plates (nail polish); verb phrase (intransitive), 14 (7.0%), e.g. It generally stays on longer than lipstick (lip stain); noun phrase, 44 (22.1%), e.g. Nail polish is a lacquer (nail polish); adjective phrase, 19 (9.5%), e.g. Choline is a water-soluble nutrient (choline); sentence, 1 (0.5%), e.g. ... reports that up to 80% of plaque can be eliminated (dental floss); total 199. However, it is possible that existing knowledge bases have already acquired such knowledge. To examine the coverage of a current knowledge base, we compare ConceptNet (Speer and Havasi 2012) with our corpus. For comparison, we use 100 usable goods in our corpus, such as ice pack, hand sanitizer, and perfume. We then manually select 4 out of 39 pre-defined relations in ConceptNet that could be associated with effect expressions: Used For, Capable Of, Causes Desire, and Causes. Of the 100 usable goods, 27 have pieces of knowledge expressed with these relations, such as (hand sanitizer, Causes, clean hand) and (toothpaste, Capable Of, help remove plaque).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 519, |
| "end": 526, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with current knowledge base", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In short, though ConceptNet contains information of our interest, its coverage is still not sufficient (27/100 usable goods). The automatic extraction of information about usable goods would help populate this kind of knowledge base. The next section presents our initial attempt toward the automatic extraction of knowledge of usable goods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with current knowledge base", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "This section presents our experiment on automatically identifying information about usable goods. The results provide baseline measures for this new semantic labeling task and suggest potential directions for improvement. Section 4.3 shows that almost all instances in our corpus occur with Target in the same sentence. We exploit this distributional property by using Target words as a cue to find information about usable goods, and pose this task as a sequence labeling problem. We use Conditional Random Fields (CRFs), a popular approach to sequence labeling problems (Lafferty et al. 2001) . CRFsuite 7 is used as the CRF implementation for our purpose.", |
| "cite_spans": [ |
| { |
| "start": 572, |
| "end": 594, |
| "text": "(Lafferty et al. 2001)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequence labeling model for identifying information of usable goods", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The training and test data consist of 792 sentences from 200 Wikipedia snippets (see Section 4.1). We select the four most frequent labels in the corpus, Effect, Means of Use, Composed of, and Version, for evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Settings", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For data pre-processing, we first parse the raw text and assign a part-of-speech tag and a named entity tag to each word using Stanford CoreNLP (Manning et al. 2014 ). Then we add a semantic label to each word in BIO format (Beginning, Inside, Outside).", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 168, |
| "text": "(Manning et al. 2014", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Settings", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Features shown in Table 5 are used for training. We use these features within a window of \u00b13 around the current word. Some of these features are used in combination with another feature as shown in Table 5 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 25, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 198, |
| "end": 205, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In addition to standard features, we add three features to exploit the characteristics of this corpus: Target, Disease and Repeat. The Target feature is true when the current word is the same as the title of the Wikipedia article. The Disease feature is true when the current word is in a list of disease names that we create using Freebase (Bollacker et al. 2008) . This feature is intended to capture effect expressions that include disease names, such as provoke allergy and asthma symptoms (air freshener). The Repeat feature is true when the current word has already appeared in the sentence. This feature is intended to capture the parallel structures that are often used to express Version and Composed of.", |
| "cite_spans": [ |
| { |
| "start": 325, |
| "end": 348, |
| "text": "(Bollacker et al. 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We compute precision, recall, and F1 measure using ten-fold cross-validation. We compute these scores in two ways, lenient match and strict match, as in the human annotation experiment (see Section 4.2). Table 6 shows the results. The F score in the lenient match (73.2%) approaches the human annotation performance (81.9%). This suggests that the model is able to identify labels to some extent. For example, the model recognizes typical lexico-syntactic patterns such as be used to in (wallpaper) is used to cover and decorate the interior walls and be designed to in (rice cooker) is designed to boil or steam rice. Furthermore, the model captures various effect expressions such as an adjective phrase (5a), a verb phrase (5b), and a gerund (5c). On the other hand, the segmentation problem discussed in the human annotation experiment lowers the F score in the strict match (13.7%).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 202, |
| "end": 209, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In sum, though there is a segmentation problem inherited from the annotation, the results in the lenient match suggest that the model can identify information about usable goods to some extent. Improving the annotation schema and increasing the size of the corpus are promising directions for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "This paper proposes semantic labels to capture aspects of knowledge of usable goods. We design an annotation schema and build a benchmark corpus, Usable Goods Corpus, based on the proposed semantic labels. Our human annotation experiment shows that (i) while there is a segmentation mismatch problem, human annotators can generally identify pieces of information about usable goods, and (ii) Wikipedia leads contain a reasonable amount of information on effects caused by using goods, in contrast to the coverage of current knowledge bases. The automatic identification experiment shows that, despite the influence of the segmentation problem in the human annotation, the model can to some extent identify pieces of information about usable goods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our next steps are to alleviate the segmentation problem and increase the corpus size. With these goals in mind, we plan to revise the annotation schema as follows: (a) Some semantic labels do not seem to be important, as seen in the statistics in Table 3 . Reducing the variation of the semantic labels is a reasonable direction. (b) Defining a syntactic category for each label and giving annotators/models parsed text would increase consistency in the segmentation. (c) These simplifications (a, b) would allow us to try crowdsourcing annotation to increase the size of the corpus. (Table 6 : 10-fold cross-validation.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 247, |
| "end": 254, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 518, |
| "end": 525, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Throughout this paper, each typewriter-font word in round brackets (e.g., toothbrush) indicates the name of a usable good that corresponds to the title of a Wikipedia article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "30th Pacific Asia Conference on Language, Information and Computation (PACLIC 30)Seoul, Republic of Korea, October 28-30, 2016", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.chokkan.org/software/crfsuite/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This study is supported by CREST, JST and JSPS KAKENHI Grant Number JP15H05318.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Berkeley FrameNet Project", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Collin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Charles", |
| "suffix": "" |
| }, |
| { |
| "first": "John B", |
| "middle": [], |
| "last": "Fillmore", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 17th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "86--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collin F Baker, Charles J Fillmore, and John B Lowe. The Berkeley FrameNet Project. In Proceedings of the 17th International Conference on Computational Linguistics, pages 86-90. Association for Computational Linguistics, 1998.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge", |
| "authors": [ |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Bollacker", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Evans", |
| "suffix": "" |
| }, |
| { |
| "first": "Praveen", |
| "middle": [], |
| "last": "Paritosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Sturge", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACM Special Interest Group on Management of Data", |
| "volume": "", |
| "issue": "", |
| "pages": "1247--1250", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of ACM Special Interest Group on Management of Data, pages 1247-1250, 2008.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Automatic Acquisition of Ranked Qualia Structures from the Web", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Cimiano", |
| "suffix": "" |
| }, |
| { |
| "first": "Johanna", |
| "middle": [], |
| "last": "Wenderoth", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "888--895", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Cimiano and Johanna Wenderoth. Automatic Acquisition of Ranked Qualia Structures from the Web. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 888-895, 2007.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Open information extraction from the web", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Michele", |
| "middle": [], |
| "last": "Banko", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Communications of the Association for Computing Machinery", |
| "volume": "51", |
| "issue": "12", |
| "pages": "68--74", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. Open information extraction from the web. Communications of the Association for Computing Machinery, 51(12):68-74, 2008.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Open Information Extraction: the Second Generation", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Janara", |
| "middle": [], |
| "last": "Christensen", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Mausam", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "3--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. Open Information Extraction: the Second Generation. In International Joint Conference on Artificial Intelligence, pages 3-10, 2011.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Identifying relations for open information extraction", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1535--1545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, 2011.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports", |
| "authors": [ |
| { |
| "first": "Harsha", |
| "middle": [], |
| "last": "Gurulingappa", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdul", |
| "middle": [ |
| "Mateen" |
| ], |
| "last": "Rajput", |
| "suffix": "" |
| }, |
| { |
| "first": "Angus", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Juliane", |
| "middle": [], |
| "last": "Fluck", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Hofmann-Apitius", |
| "suffix": "" |
| }, |
| { |
| "first": "Luca", |
| "middle": [], |
| "last": "Toldo", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Journal of biomedical informatics", |
| "volume": "45", |
| "issue": "5", |
| "pages": "885--892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of biomedical informatics, 45(5):885-892, 2012.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Eighteenth International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289, 2001.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The Measurement of Observer Agreement for Categorical Data", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Landis", |
| "suffix": "" |
| }, |
| { |
| "first": "Gary G", |
| "middle": [], |
| "last": "Koch", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Biometrics", |
| "volume": "33", |
| "issue": "1", |
| "pages": "159--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J Richard Landis and Gary G Koch. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159-174, 1977.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The Stanford CoreNLP Natural Language Processing Toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcclosky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Association for Computational Linguistics System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics System Demonstrations, pages 55-60, 2014.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "WordNet: a lexical database for English", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "George", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the Association for Computing Machinery", |
| "volume": "38", |
| "issue": "", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A Miller. WordNet: a lexical database for English. Communications of the Association for Computing Machinery, 38(11):39-41, 1995.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A Review of Relational Machine Learning for Knowledge Graphs", |
| "authors": [ |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Nickel", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| }, |
| { |
| "first": "Volker", |
| "middle": [], |
| "last": "Tresp", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Institute of Electrical and Electronics Engineers", |
| "volume": "104", |
| "issue": "", |
| "pages": "11--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the Institute of Electrical and Electronics Engineers, 104(1):11-33, 2016.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The Generative Lexicon", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Pustejovsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Computational Linguistics", |
| "volume": "17", |
| "issue": "4", |
| "pages": "409--441", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Pustejovsky. The Generative Lexicon. Computational Linguistics, 17(4):409-441, 1991.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Lexical Semantic Techniques for Corpus Analysis", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Pustejovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Anick", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Bergler", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "331--358", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Pustejovsky, Peter Anick, and Sabine Bergler. Lexical Semantic Techniques for Corpus Analysis. Computational Linguistics, 19(2):331-358, 1993.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Representing General Relational Knowledge in ConceptNet 5", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Speer", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Havasi", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "3679--3686", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Speer and Catherine Havasi. Representing General Relational Knowledge in ConceptNet 5. In Language Resources and Evaluation Conference, pages 3679-3686, 2012.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "BRAT: a Web-based Tool for NLP-Assisted Text Annotation", |
| "authors": [ |
| { |
| "first": "Pontus", |
| "middle": [], |
| "last": "Stenetorp", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampo", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "Goran", |
| "middle": [], |
| "last": "Topi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoko", |
| "middle": [], |
| "last": "Ohta", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophia", |
| "middle": [], |
| "last": "Ananiadou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Demonstrations at the 13th European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "102--107", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. BRAT: a Web-based Tool for NLP-Assisted Text Annotation. In Proceedings of the Demonstrations at the 13th European Chapter of the Association for Computational Linguistics, pages 102-107, 2012.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic Acquisition of Qualia Structure from Corpus Data", |
| "authors": [ |
| { |
| "first": "Ichiro", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Sumiyoshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Masahiro", |
| "middle": [], |
| "last": "Shibata", |
| "suffix": "" |
| }, |
| { |
| "first": "Yagi", |
| "middle": [], |
| "last": "Nobuyuki", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "The Institute of Electronics, Information and Communication Engineers Transactions on Information and Systems", |
| "volume": "90", |
| "issue": "", |
| "pages": "1534--1541", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ichiro Yamada, Timothy Baldwin, Hideki Sumiyoshi, Masahiro Shibata, and Yagi Nobuyuki. Automatic Acquisition of Qualia Structure from Corpus Data. The Institute of Electronics, Information and Communication Engineers Transactions on Information and Systems, 90(10):1534-1541, 2007.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Assigned semantic labels for an excerpt from the Wikipedia article on fish oil.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "(4) a. (herbal distillate) ... obtained by steam distillation or hydrodistillation (herbal distillate)b. Modern perfumery began in late 19th century with the commercial synthesis (perfume)", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "(5) a. Chandeliers are often ornate, and normally use... (chandelier) b. A diuretic is any substance that promotes the production of urine.(diuretic)c. An espresso machine brews coffee by forcing pressurized water near boiling point... (espresso machine)", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>Label</td><td>Definition</td><td>Example</td></tr><tr><td>Target</td><td>expression referring to a target object, including aliases and pronouns</td><td>BB cream stands for blemish balm, blemish base (BB cream)</td></tr><tr><td>Effect</td><td>effect caused by using Target</td><td>to decorate and protect the nail plates (nail polish)</td></tr><tr><td>Null Effect</td><td>description that states there is no Effect</td><td>The myth of its effectiveness (bear's grease)</td></tr><tr><td>Degree of Effect</td><td>description that states a degree of Effect</td><td>poor substitute for protective clothing (barrier cream)</td></tr><tr><td>Certainty of Effect</td><td>description that states a certainty/reliability of Effect</td><td>have not been proven to give lasting or major positive effects (anti-aging cream)</td></tr><tr><td>Means of Use</td><td>description of how Target is used</td><td>is applied around the contours of the eye(s) (eye liner)</td></tr><tr><td>Composed of</td><td>material/ingredient that composes Target</td><td>consisting mainly of triglycerides (egg oil)</td></tr><tr><td>Part of</td><td>material/object that Target is a part of</td><td>Cinnamon is a spice obtained from the inner bark (cinnamon)</td></tr><tr><td>Location</td><td>description of where Target is used</td><td>often used where sunlight can impair seeing (eye black)</td></tr><tr><td>Time</td><td>description of when Target is used</td><td>soon after birth (kohl)</td></tr><tr><td>User</td><td>description of who uses/receives Effect</td><td>mothers would apply kohl to their infants' eyes (kohl)</td></tr><tr><td>Version</td><td>different version of Target</td><td>It is distributed as a liquid or a soft solid (lip gloss)</td></tr></table>", |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>Type of match</td><td>F-score (%)</td></tr><tr><td>lenient match (micro average)</td><td>77.2</td></tr><tr><td>lenient match (macro average)</td><td>52.5</td></tr><tr><td>strict match (micro average)</td><td>36.8</td></tr><tr><td>strict match (macro average)</td><td>27.1</td></tr></table>", |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "text": "Numbers of the annotated labels", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "content": "<table/>", |
| "text": "Features", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |