| { |
| "paper_id": "W14-0124", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T05:54:00.843802Z" |
| }, |
| "title": "Graph Based Algorithm for Automatic Domain Segmentation of WordNet", |
| "authors": [ |
| { |
| "first": "Brijesh", |
| "middle": [], |
| "last": "Bhatt", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Indian Institute of Technology Bombay Mumbai", |
| "location": { |
| "country": "India" |
| } |
| }, |
| "email": "brijesh@cse.iitb.ac.in" |
| }, |
| { |
| "first": "Subhash", |
| "middle": [], |
| "last": "Kunnath", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Indian Institute of Technology Bombay Mumbai", |
| "location": { |
| "country": "India" |
| } |
| }, |
| "email": "subhash@cse.iitb.ac.in" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Indian Institute of Technology Bombay Mumbai", |
| "location": { |
| "country": "India" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a graph based algorithm for automatic domain segmentation of Wordnet. We pose the problem as a Markov Random Field Classification problem and show how existing graph based algorithms for Image Processing can be used to solve the problem. Our approach is unsupervised and can be easily adopted for any language. We conduct our experiments for two domains, health and tourism. We achieve F-Score more than .70 in both domains. This work can be useful for many critical problems like word sense disambiguation, domain specific ontology extraction etc.", |
| "pdf_parse": { |
| "paper_id": "W14-0124", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a graph based algorithm for automatic domain segmentation of Wordnet. We pose the problem as a Markov Random Field Classification problem and show how existing graph based algorithms for Image Processing can be used to solve the problem. Our approach is unsupervised and can be easily adopted for any language. We conduct our experiments for two domains, health and tourism. We achieve F-Score more than .70 in both domains. This work can be useful for many critical problems like word sense disambiguation, domain specific ontology extraction etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Over the years, Wordnet has served as an important lexical resource for many Natural Language Processing (NLP) applications. Picking up a right sense of a word from the fine grained sense repository of Wordnet is at the heart of many NLP problems. Many researchers have used Wordnet for domain specific applications like word sense disambiguation (Magnini et al., 2002a; Khapra et al., 2010) , domain specific taxonomy/ontology extraction (Cimiano and Vlker, 2005; Yanna and Zili, 2009) etc. These applications rely on 'One sense per discourse' (Gale et al., 1992) hypothesis to identify domain specific sense of a word. 'Dividing Wordnet's lexical and conceptual space into various domain specific subspace can significantly reduce search space and thus help many domain specific applications' (Xiaojuan and Fellbaum, 2012) .", |
| "cite_spans": [ |
| { |
| "start": 347, |
| "end": 370, |
| "text": "(Magnini et al., 2002a;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 371, |
| "end": 391, |
| "text": "Khapra et al., 2010)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 439, |
| "end": 464, |
| "text": "(Cimiano and Vlker, 2005;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 465, |
| "end": 486, |
| "text": "Yanna and Zili, 2009)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 545, |
| "end": 564, |
| "text": "(Gale et al., 1992)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 795, |
| "end": 824, |
| "text": "(Xiaojuan and Fellbaum, 2012)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "With the purpose of categorizing Wordnet senses for different domain specific applications, Magnini and Cavagli (2000) constructed a domain hierarchy of 164 domain labels and annotated Wordnet synsets with one or more label from the hierarchy. The categories were further refined by linking domain labels to subject codes of Dewey Decimal Classification system (Bentivogli et al., 2004) . Beginning with Wordnet 2.0, Domain category pointers were introduced to link domain specific synsets across part of speech. However, the manual determination of a set of domain labels and assigning them to Wordnet synsets is a time consuming task. Also, the senses of words evolves over a period of time and accordingly Wordnet synsets also undergo changes. This makes the static assignment of domain label a costly exercise.", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 118, |
| "text": "Magnini and Cavagli (2000)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 361, |
| "end": 386, |
| "text": "(Bentivogli et al., 2004)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "With the intention to reduce manual labor of domain categorization and to facilitate use of Wordnet in domain specific applications, there has been efforts to (semi) automatically assign domain labels to Wordnet synsets. Most of these efforts rely on Wordnet concept hierarchy and use label propagation schemes to propagate domain labels through the hierarchy. However, the heterogeneous level of generality poses a key challenge to such approaches. For example, 'Under Animal (subsumed by Life Form) we find out specific concepts, such as Work Animal, Domestic Animal, kept together with general classes such as Chordate, Fictional Animal, etc.' (Gangemi et al., 2003) . Another key challenge in assigning the domain labels is the quality of domain hierarchy and semantic distance between domain labels (Xiaojuan and Fellbaum, 2012) .", |
| "cite_spans": [ |
| { |
| "start": 647, |
| "end": 669, |
| "text": "(Gangemi et al., 2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 804, |
| "end": 833, |
| "text": "(Xiaojuan and Fellbaum, 2012)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we present a corpus based approach for automatic domain segmentation of Wordnet. The aim of our work is to provide a general solution that can be used across languages to construct domain specific conceptualization from Wordnet. The proposed system works in two steps, 1. We construct domain specific conceptualiza-tion from the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "disambiguated and linked to Wordnet synsets to generate domain labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The domain specific conceptualization is then", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We pose Wordnet domain segmentation as an image labeling problem and use existing techniques in the field of image processing system to solve Wordnet domain labeling problem. The proposed method is completely unsupervised and requires only Part Of Speech tagged corpus. Hence, it can be easily adopted across languages. Our method also does not require any predefined set of domain category labels, however if such labels are available it can be incorporated into system to generate better labeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The domain specific conceptualization is then", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The remaining of the paper is organized as follows, section 2 describes related work. Section 3 describes the proposed graph based algorithm for Wordnet domain labeling. Section 4 and 5 discuss the experiments and conclusion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The domain specific conceptualization is then", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Two major attempts to categorize Wordnet synsets are Wordnet Domain (Magnini and Cavagli, 2000) and Wordnet Domain Category pointers. In this section we first present a brief overview of these efforts and then describe some efforts to automate the task of domain labeling of Wordnet synsets. We also mention the attempts made for other languages apart from English.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 95, |
| "text": "(Magnini and Cavagli, 2000)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Domain categorization of Wordnet synset has been an active area of research for more than a decade now. Magnini and Cavagli (2000) have developed Wordnet Domain Hierarchy (WDH) by annotating Wordnet1.6 using 250 Subject Field Codes (SFC). They used semi-automated approach in which the top level concepts are manually marked with SFC and then the labels are automatically propagated through the hierarchy. Finally, the labeling is again evaluated and refined manually. The semantic structure of WDH was further refined by Bentivogli et al. (2004) . Starting from Wordnet 2.0, domain category pointers were introduced in the Wordnet. 'Unlike the original Wordnet Domain, the domain category pointers use Wordnet synsets as domain labels and synsets across part of speech are linked through domain pointers' (Xiaojuan and Fellbaum, 2012) . However, 'only 5% of Wordnet 3.0 synsets are linked to 438 domain categories and out of these linked synsets only 30% synsets have same label in both Wordnet Domain and Domain Category'.", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 130, |
| "text": "Magnini and Cavagli (2000)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 522, |
| "end": 546, |
| "text": "Bentivogli et al. (2004)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 806, |
| "end": 835, |
| "text": "(Xiaojuan and Fellbaum, 2012)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wordnet Domain Hierarchy and Domain Category Pointers", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Considering the growing size of Wordnet and the amount of efforts required to construct domain categories, it is apparent to develop semiautomated or automated methods for domain categorization of Wordnets. One of the earlier efforts in this direction was by Buitelaar and Sacaleanu (2001) . They extracted domain specific terms using tf*idf measure and then disambiguated these terms using GermaNet synsets. The disambiguation was performed based on the assumption that the hypernymy and hyponymy terms are more likely to have same domain label. Magnini et al. (2002b) have performed a comparative study of corpus based and ontology based domain annotation. They have used frequency of words in the synonym set as a measure to identify domain of a synset.", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 289, |
| "text": "Buitelaar and Sacaleanu (2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 547, |
| "end": 569, |
| "text": "Magnini et al. (2002b)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Approaches", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Gonzalez-Agirre et al. 2012have proposed a semi-automatic method to align the original Wordnet 1.6 based domains to Wordnet 3.0. They have used domain labels already assigned to some top level synsets and then propagated the domain label across Wordnet hierarchy using UKB algorithm (Agirre and Soroa, 2009) . Their approach is based on an assumption that 'A synset directly related to several synsets labeled with a particular domain (i.e biology) would itself possibly be also related somehow to that domain (i.e. biology)'(Gonzalez-Agirre et al., 2012).", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 307, |
| "text": "(Agirre and Soroa, 2009)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Approaches", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Fukumoto and Suzuki (2011) have adopted a corpus based approach to assign domain labels to Wordnet synsets. They first disambiguate the corpus words with Wordnet senses and then use Markov Random Walk based Page Rank Algorithm to rank domain relevance of Wordnet senses. Zhu et al. (2011) have proposed gloss based disambiguation technique for domain assignment to Wordnet synset. They used existing domain labels of Wordnet 3.0 and predicted domains based on words in the gloss of the synsets.", |
| "cite_spans": [ |
| { |
| "start": 271, |
| "end": 288, |
| "text": "Zhu et al. (2011)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Approaches", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "There have also been efforts to adopt English Wordnet domain labels for other languages. Lee et al. (2009) have used English-Chinese Wordnet mapping to domain tag Chinese Wordnet.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 106, |
| "text": "Lee et al. (2009)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Approaches", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Like Buitelaar and Sacaleanu (2001) , Magnini et al. (2002b) and Fukumoto and Suzuki (2011) , we also follow corpus based approach for Wordnet Domain Labeling. Key points of difference among these approaches can be summerized as follows, 1. Both Buitelaar and Sacaleanu (2001) and Magnini et al. (2002b) used word frequency to detect domain specificity of a term. They do not consider the label of neighbor terms to determine the label for a term.", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 35, |
| "text": "Buitelaar and Sacaleanu (2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 38, |
| "end": 60, |
| "text": "Magnini et al. (2002b)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 65, |
| "end": 91, |
| "text": "Fukumoto and Suzuki (2011)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 246, |
| "end": 276, |
| "text": "Buitelaar and Sacaleanu (2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 281, |
| "end": 303, |
| "text": "Magnini et al. (2002b)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Approach", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "2. Fukumoto and Suzuki (2011) have modeled domain labeling as a Markov Random Walk problem, but they run their algorithm on entire Wordnet graph. This is costly in terms of time and space required for the processing. In addition to that, Wordnet hypernymyhyponymy graph may not be a true representative of domain specific conceptualization.", |
| "cite_spans": [ |
| { |
| "start": 3, |
| "end": 29, |
| "text": "Fukumoto and Suzuki (2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Approach", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In contrast to the above mentioned approaches, our approach is based on the hypothesis that, 'Domain specificity of a term depends on the spatial property of the term'. So it is important to construct a domain specific conceptualization to identify domain of a term. The domain for a concept/term depends not only on the occurence of the term in the domain but also on the neighbors of the concept/term. Hence, we follow two step process in which first we construct a domain conceptualization from the corpus and then we align this conceptualization with Wordnet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Approach", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The proposed algorithm carves out a domain specific subgraph from the Wordnet. For that, we first construct concept graph from the corpus and then associate concepts with Wordnet senses. Figure 1 shows the overall system architecture. As shown in the figure 1 after preprocessing, the similarity graph is constructed from the corpus. Using a graph based algorithm similarity graph is converted into domain conceptualization and then it is linked with Wordnet synsets to assign domain labels to Wordnet synsets. The detailed description of each component is as follows.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 187, |
| "end": 193, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Algorithm", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The text corpus is first POS tagged using Stanford POS tagger 1 and Morph Analyzer 2 . Then term frequency of each term is calculated using weirdness measure (Ahmad et al., 1999) . Context vector for each term is constructed using Point Wise Mutual Information (Church and Hanks, 1990 ) measure. We used a sentence as a boundary to calculate context vector. Output of the preprocessing step is a list of domain specific terms and their context vector.", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 178, |
| "text": "(Ahmad et al., 1999)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 261, |
| "end": 284, |
| "text": "(Church and Hanks, 1990", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "3.1" |
| }, |
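The preprocessing step above (sentence-window co-occurrence scored with Pointwise Mutual Information) can be sketched as follows. This is a minimal sketch, not the authors' code: the function name and the positive-PMI cutoff are illustrative choices.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_context_vectors(sentences):
    """Build PMI-weighted context vectors, using the sentence as the
    co-occurrence window as the paper does. `sentences` is a list of
    token lists (already POS-filtered terms)."""
    term_counts = Counter()   # number of sentences each term appears in
    pair_counts = Counter()   # number of sentences each pair co-occurs in
    n_sents = len(sentences)
    for sent in sentences:
        terms = set(sent)
        term_counts.update(terms)
        for a, b in combinations(sorted(terms), 2):
            pair_counts[(a, b)] += 1
    vectors = {t: {} for t in term_counts}
    for (a, b), c_ab in pair_counts.items():
        # PMI(a, b) = log P(a, b) / (P(a) P(b)), probabilities over sentences
        pmi = math.log((c_ab * n_sents) / (term_counts[a] * term_counts[b]))
        if pmi > 0:  # keeping only positive PMI is a common practical choice
            vectors[a][b] = pmi
            vectors[b][a] = pmi
    return vectors
```

The resulting sparse vectors can then feed any vector similarity (e.g. cosine) when weighting graph edges in the next step.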
| { |
| "text": "Using the term list and context vectors generated in the preprocessing step, a graph G(V, E) is constructed in which each v_i \u2208 V is a term and each edge e(v_i, v_j) is weighted by the semantic relatedness between the terms v_i and v_j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Algorithm 1 takes as input the graph G(V, E), where V is a set of random variables and E carries the penalty costs, a cost f(v_l) of assigning label l \u2208 L to v \u2208 V, a set of initial labelings {(v, l) for all v \u2208 V and l \u2208 L}, and an energy function \u03b8. For each pair v_i, v_j \u2208 V it sets Source \u2190 v_i and Target \u2190 v_j and performs a graph cut.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "tcost = i\u2208V E i (X i )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where, X i is the label assigned to term i and E i is the cost of assigning label X i to node i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We use term frequency based measure to calculate cost of assigning label to a term. A term should be assigned to a domain in which it occurs more frequently. Hence, high tf indicates less cost to assign the term to domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Thus,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "E i (X i ) = 1 \u2212 tf i (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where, tf i is the term frequency of the term i in domain X.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 Edge Cost: This measures the cost of assigning separate labels to the two adjacent nodes of an edge. This is formulated as a cost function as shown in equation 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "ecost = (i,j)\u2208E E ij (x i , x j )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where E ij (x i , x j ) = cost of assigning different label to neighboring nodes i and j. E ij (x i , x j ) is equal to semantic similarity between nodes x i and x j . Higher the similarity between nodes x i and x j more is the penalty to assign different labels to x i and x j .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "This can be formulated as an energy minimization over a Markov Random Field (Kleinberg and Tardos, 2002) . Finding optimal solution is equal to minimizing equation 4. Figure 3 shows an example configuration of the concept graph with three nodes t 1 , t 2 and t 3 and two domains d 1 and d 2 . Edges from the nodes t i to d j indicates value of cost function c(p, d) of equation 2. and edges between nodes t i and t j indicates cost for assigning different labels to node t i and t j . As can be seen in the figure to minimize \u03b8 of equation 4, node t 1 will be assign to domain d 2 and node t 3 will be assign to domain d 1 . Choice is to be made for t 2 , since it has equal cost to be in d 1 or d 2 . If t 2 is assigned label d 1 , then ecost of equation 2 is 0.8, since label for node t 1 and t 2 will be different. In the same way ecost will be 0.2 if t 2 is assigned d 2 . So to minimize \u03b8, Final labeling is t 1 and t 2 are assigned d 2 and t 3 is assigned d 1 .", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 104, |
| "text": "(Kleinberg and Tardos, 2002)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 175, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "minimze\u03b8 = i\u2208V E i (X i ) + (i,j)\u2208E E ij (x i , x j ) (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
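Equation 4 can be evaluated directly on the worked example of Figure 3. The sketch below is illustrative: the dictionary representation and the node costs for t_1 and t_3 are assumptions chosen so that d_2 and d_1 are their cheap labels, as in the figure; only the edge similarities 0.8 and 0.2 come from the text.

```python
def energy(labels, node_cost, edge_sim):
    """Energy theta of a labeling per equation 4: the sum of node costs
    E_i(X_i) plus, for each edge, the similarity-based penalty E_ij paid
    only when the two endpoints receive different labels."""
    theta = sum(node_cost[v][labels[v]] for v in labels)
    for (i, j), sim in edge_sim.items():
        if labels[i] != labels[j]:
            theta += sim  # penalty for cutting a high-similarity edge
    return theta
```

With t_2's node cost equal in both domains, assigning t_2 to d_2 cuts only the weaker edge (t_2, t_3) and yields the lower total energy, matching the final labeling described above.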
| { |
| "text": "In other words, to minimize the cost of assignment \u03b8 we cut the edge (t 2 , t 3 ) . Thus the energy minimization problem can be solved by performing 'Min-Cut' on graph. For two labels the problem is solvable in polynomial time. However, for more than 2 labels, solving this optimization problem is NP hard (Kolmogorov and Zabih, 2002) .", |
| "cite_spans": [ |
| { |
| "start": 306, |
| "end": 334, |
| "text": "(Kolmogorov and Zabih, 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 81, |
| "text": "(t 2 , t 3 )", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In the field of image processing, many problems, e.g. image forground-background detection, image segmentation etc. are formulated as energy minimization in Markov Random Filed. Some of the graph-cut based algorithms to perform the task are, \u03b1-expansion, \u03b1 \u2212 \u03b2 swap (Schmidt and Alahari, 2011) and \u03b1 swap \u03b2 shrink algorithm . For our experiment we use \u03b1 swap \u03b2 shrink algorithm proposed in Schmidt and Alahari (2011). We are briefly describing the basic idea of the algorithms here. Readers are directed to Kolmogorov and Zabih (2002) and Szeliski et al. (2008) for further details.", |
| "cite_spans": [ |
| { |
| "start": 507, |
| "end": 534, |
| "text": "Kolmogorov and Zabih (2002)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 539, |
| "end": 561, |
| "text": "Szeliski et al. (2008)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For more than two labels (domains), a suboptimal solution can be derived by iteratively performing graph cut for a pair of labels. This problem is usually solved using iterative descent technique. As shown in algorithm 1, the algorithm start with an initial assignment. In each iteration the algorithm selects a pair of labels and performs the graph cut. Based on the graph cut the labels will be reassigned to the nodes. The energy function \u03b8 is calculated at the end of each iteration and the value of \u03b8 is minimized after every iteration to guarantee the convergence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constructing Document Graph", |
| "sec_num": "3.2" |
| }, |
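The iterative descent described above can be illustrated with a much simpler stand-in: iterated conditional modes (ICM), which moves one node at a time to its locally cheapest label instead of solving a min-cut per label pair. Like the iteration described above it decreases θ monotonically until no move helps, but it lacks the guarantees of the graph-cut moves; all names and data structures below are illustrative, not the authors' implementation.

```python
def icm_descent(nodes, labels_set, node_cost, edge_sim, init, max_iter=20):
    """Coordinate-descent (ICM) sketch: repeatedly move each node to the
    label that minimizes its node cost plus the penalties on its incident
    edges, stopping when a full sweep changes nothing."""
    labels = dict(init)

    def local_cost(v, lab):
        c = node_cost[v][lab]
        for (i, j), sim in edge_sim.items():
            if i == v and labels[j] != lab:
                c += sim  # would cut this edge
            elif j == v and labels[i] != lab:
                c += sim
        return c

    for _ in range(max_iter):
        changed = False
        for v in nodes:
            best = min(labels_set, key=lambda lab: local_cost(v, lab))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break  # converged: no single-node move lowers the energy
    return labels
```

On the Figure 3 example, starting from a frequency-based initial assignment, the descent moves t_2 into d_2 and then stabilizes.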
| { |
| "text": "This module takes domain specific concept graph generated from previous step as an input and assigns wordnet sense to each term. A term can have more than one sense in the Wordnet and two terms can refer to same Wordnet synset. So the basic approach for the disambiguation is 'Split for Polysemy and Merge for Synonymy'. As shown in Algorithm 2, the algorithm iterates through the nodes of the concept graph in a breadth first manner. For each vertex in the graph, all possible senses are found from the Wordnet. If a vertex v has n senses then new nodes v 1 , v 2 , ..., v n are created. Then the sense nodes are linked with each other using Wordnet semantic relation, e.g. if two senses s i and s j are hypernymhyponym in wordnet then and edge is created between them. Figure 4 shows an example of vertex split. The left side of the fig. 4 shows concept graph for term node cancer. The term cancer has five different senses. Hence the algorithm creates five nodes for the term, one for each Wordnet sense. Then the edges are distributed across vertices depending upon the participating sense. Node sign is assigned to sense 1977832, and nodes leukemia, lymphoma and Ailment are assigned to sense 14239981. Other sense nodes do not have neighbors in the domain. Hence, sense 14239981 becomes winner sense in Health domain and it is tagged in the domain. Right side of the fig. 4 shows resulting wordnet sense graph.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 771, |
| "end": 779, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 835, |
| "end": 841, |
| "text": "fig. 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1373, |
| "end": 1379, |
| "text": "fig. 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Split-Merge algorithm to Link concept to Wordnet", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Once new vertices are created for all vertices in the graph, the vertices with no edge are deleted and vertices for which the sense ids are same are merged as synonymy. Thus, at the end of the process we get a Wordnet sense graph specific to the domain. We label each sense with the specific do- ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split-Merge algorithm to Link concept to Wordnet", |
| "sec_num": "3.4" |
| }, |
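The split-merge step can be sketched with a toy sense inventory standing in for real WordNet lookups. The `senses` and `related` inputs below are assumptions (in practice they would come from WordNet), and the numeric sense ids reuse the cancer example from the text purely for illustration.

```python
from collections import defaultdict

def split_merge(concept_graph, senses, related):
    """Sketch of 'Split for Polysemy and Merge for Synonymy'.
    `concept_graph` maps each term to its neighbor terms, `senses` maps
    a term to its candidate sense ids, and `related(s1, s2)` says whether
    two sense ids are linked in the wordnet (e.g. hypernym-hyponym)."""
    sense_graph = defaultdict(set)
    for term, neighbors in concept_graph.items():
        for s in senses[term]:                 # split: one node per sense
            for nb in neighbors:
                for s_nb in senses[nb]:
                    if related(s, s_nb):       # keep edges whose senses are linked
                        sense_graph[s].add(s_nb)
                        sense_graph[s_nb].add(s)
    # merge happens implicitly: shared sense ids collapse into one node,
    # and sense nodes that gained no edges are dropped
    return {s: nbrs for s, nbrs in sense_graph.items() if nbrs}
```

In the cancer example, the medical sense keeps its edges to leukemia's sense while the astrological sense links only to sign, so the two readings end up in separate regions of the sense graph.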
| { |
| "text": "We have conducted our experiments on publicly available Heath and Tourism Corpus 3 (Khapra et al., 2010). As shown in Table 1 the total number of unique terms after preprocessing and stop word removal are 25056 in health domain and 56325 in tourism domain. We applied further thresholding and remove low frequency terms (Frequency less than 10) to reduce the size of the graph. For preprocessing we have used Stanford POS tagger and morpha morph analyzer. We have used Matlab UGM package 4 which is publicly available for researchers. UGM package provides implementation of \u03b1-expand, \u03b1 \u2212 \u03b2 swap Domain Precision Recall F-Score Health 0.69 0.82 0.74 Tourism 0.65 0.80 0.71 Table 2 : Precision and Recall of domain labeling and \u03b1-expansion-\u03b2-Shrink algorithms. The graph based disambiguation algorithm is written using JGraphT library 5 . The overall performance of the system is calculated against manually labeled domain tags. Table 2 shows overall precision, recall and f-score for both the domains.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 672, |
| "end": 679, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As shown in Table 2 the recall value is found to be higher than the precision in both the domains. Reason for high value of recall is the initial labels and high number of edges. Initial labels are assigned based on the term frequency, then based on the labels of the neighboring nodes, node labels are changed. We observe that in case of two domains this leads to add more false positives. In order to reduce recall value and increase precision, we need to run experiments for more domains and with higher edge weights.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We have proposed a novel graph based approach for automatic domain tagging of WordNet synsets. We pose domain labeling as an energy minization problem and show how the existing image labeling algorithms can be used for the task of Word-Net domain tagging. Our approach is completely unsupervised and can be easily adopted across languages. For our experiments we used term frequency based assignment of initial labels, however other existing label can be used to enhance the labeling. In future we aim to construct domain labels for more domains and compare our system with existing labeling. We are also aiming to test our system for multiple languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "http://nlp.stanford.edu/software/ tagger.shtml 2 http://www.sussex.ac.uk/Users/johnca/ morph.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cfilt.iitb.ac.in/wsd/ annotated_corpus 4 http://www.di.ens.fr/\u02dcmschmidt/ Software/alphaBeta.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://jgrapht.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Personalizing pagerank for word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Aitor", |
| "middle": [], |
| "last": "Soroa", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL '09", |
| "volume": "", |
| "issue": "", |
| "pages": "33--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre and Aitor Soroa. 2009. Personalizing pagerank for word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL '09, pages 33-41.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Weirdness indexing for logical document extrapolation and retrieval (wilder)", |
| "authors": [ |
| { |
| "first": "Khurshid", |
| "middle": [], |
| "last": "Ahmad", |
| "suffix": "" |
| }, |
| { |
| "first": "Lee", |
| "middle": [], |
| "last": "Gillam", |
| "suffix": "" |
| }, |
| { |
| "first": "Lena", |
| "middle": [], |
| "last": "Tostevin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ai", |
| "middle": [], |
| "last": "Group", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "The Eighth Text REtrieval Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Khurshid Ahmad, Lee Gillam, Lena Tostevin, and Ai Group. 1999. Weirdness indexing for logical document extrapolation and retrieval (wilder). In The Eighth Text REtrieval Conference.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Revising the wordnet domains hierarchy: semantics, coverage and balancing", |
| "authors": [ |
| { |
| "first": "Luisa", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "Pamela", |
| "middle": [], |
| "last": "Forner", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| }, |
| { |
| "first": "Emanuele", |
| "middle": [], |
| "last": "Pianta", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Workshop on Multilingual Linguistic Ressources, MLR '04", |
| "volume": "", |
| "issue": "", |
| "pages": "101--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luisa Bentivogli, Pamela Forner, Bernardo Magnini, and Emanuele Pianta. 2004. Revising the wordnet domains hierarchy: semantics, coverage and balancing. In Proceedings of the Workshop on Multilingual Linguistic Ressources, MLR '04, pages 101-108.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Ranking and selecting synsets by domain relevance", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Buitelaar", |
| "suffix": "" |
| }, |
| { |
| "first": "Bogdan", |
| "middle": [], |
| "last": "Sacaleanu", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "proceedings NAACL wordnet workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Buitelaar and Bogdan Sacaleanu. 2001. Ranking and selecting synsets by domain relevance. In Proceedings of the NAACL WordNet Workshop.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Word association norms, mutual information, and lexicography", |
| "authors": [ |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "Ward" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Comput. Linguist", |
| "volume": "16", |
| "issue": "1", |
| "pages": "22--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Comput. Linguist., 16(1):22-29.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Text2onto -a framework for ontology learning and data-driven change discovery", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Cimiano", |
| "suffix": "" |
| }, |
| { |
| "first": "Johanna", |
| "middle": [], |
| "last": "Vlker", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 10th International Conference on Applications of Natural Language to Information Systems (NLDB)", |
| "volume": "3513", |
| "issue": "", |
| "pages": "227--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Cimiano and Johanna Vlker. 2005. Text2onto - a framework for ontology learning and data-driven change discovery. In Proceedings of the 10th International Conference on Applications of Natural Language to Information Systems (NLDB), volume 3513 of Lecture Notes in Computer Science, pages 227-238.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Identification of domain-specific senses in a machinereadable dictionary", |
| "authors": [ |
| { |
| "first": "Fumiyo", |
| "middle": [], |
| "last": "Fukumoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshimi", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACL (Short Papers)", |
| "volume": "", |
| "issue": "", |
| "pages": "552--557", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fumiyo Fukumoto and Yoshimi Suzuki. 2011. Identification of domain-specific senses in a machine-readable dictionary. In ACL (Short Papers), pages 552-557.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "One sense per discourse", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the workshop on Speech and Natural Language, HLT '91", |
| "volume": "", |
| "issue": "", |
| "pages": "233--237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. One sense per discourse. In Proceedings of the workshop on Speech and Natural Language, HLT '91, pages 233-237.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Sweetening wordnet with dolce", |
| "authors": [ |
| { |
| "first": "Aldo", |
| "middle": [], |
| "last": "Gangemi", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Guarino", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudio", |
| "middle": [], |
| "last": "Masolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Oltramari", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "AI Mag", |
| "volume": "24", |
| "issue": "3", |
| "pages": "13--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aldo Gangemi, Nicola Guarino, Claudio Masolo, and Alessandro Oltramari. 2003. Sweetening wordnet with dolce. AI Mag., 24(3):13-24.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A proposal for improving wordnet domains", |
| "authors": [ |
| { |
| "first": "Aitor", |
| "middle": [], |
| "last": "Gonzalez-Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Mauro", |
| "middle": [], |
| "last": "Castillo", |
| "suffix": "" |
| }, |
| { |
| "first": "German", |
| "middle": [], |
| "last": "Rigau", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "3457--3462", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aitor Gonzalez-Agirre, Mauro Castillo, and German Rigau. 2012. A proposal for improving wordnet do- mains. In Proceedings of Language Resources and Evaluation Conference, pages 3457-3462.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "All words domain adapted wsd: finding a middle ground between supervision and unsupervision", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitesh", |
| "suffix": "" |
| }, |
| { |
| "first": "Anup", |
| "middle": [], |
| "last": "Khapra", |
| "suffix": "" |
| }, |
| { |
| "first": "Saurabh", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Sohoney", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1541", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitesh M. Khapra, Anup Kulkarni, Saurabh Sohoney, and Pushpak Bhattacharyya. 2010. All words domain adapted wsd: finding a middle ground between supervision and unsupervision. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 1532-1541.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Approximation algorithms for classification problems with pairwise relationships: metric labeling and markov random fields", |
| "authors": [ |
| { |
| "first": "Jon", |
| "middle": [], |
| "last": "Kleinberg", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tardos", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "J. ACM", |
| "volume": "49", |
| "issue": "5", |
| "pages": "616--639", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jon Kleinberg and Éva Tardos. 2002. Approximation algorithms for classification problems with pairwise relationships: metric labeling and markov random fields. J. ACM, 49(5):616-639.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "What energy functions can be minimized via graph cuts?", |
| "authors": [ |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Kolmogorov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramin", |
| "middle": [], |
| "last": "Zabih", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 7th European Conference on Computer Vision-Part III, ECCV '02", |
| "volume": "", |
| "issue": "", |
| "pages": "65--81", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vladimir Kolmogorov and Ramin Zabih. 2002. What energy functions can be minimized via graph cuts? In Proceedings of the 7th European Conference on Computer Vision-Part III, ECCV '02, pages 65-81.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Chinese wordnet domains: Bootstrapping chinese wordnet with semantic domain labels", |
| "authors": [ |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu-Ting", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chu-Ren", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "288--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lung-Hao Lee, Yu-Ting Yu, and Chu-Ren Huang. 2009. Chinese wordnet domains: Bootstrapping chinese wordnet with semantic domain labels. In Olivia Kwong, editor, PACLIC, pages 288-296.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Integrating subject field codes into wordnet", |
| "authors": [ |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriela", |
| "middle": [], |
| "last": "Cavagli", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1413--1418", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernardo Magnini and Gabriela Cavagli. 2000. Integrating subject field codes into wordnet. pages 1413-1418.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The role of domain information in word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Pezzulo", |
| "suffix": "" |
| }, |
| { |
| "first": "Alfio", |
| "middle": [], |
| "last": "Gliozzo", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Natural Language Engineering", |
| "volume": "8", |
| "issue": "", |
| "pages": "359--373", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernardo Magnini, Giovanni Pezzulo, and Alfio Gliozzo. 2002a. The role of domain information in word sense disambiguation. Natural Language Engineering, 8:359-373.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Comparing ontology-based and corpus-based domain annotations in wordnet", |
| "authors": [ |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlo", |
| "middle": [], |
| "last": "Strapparava", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Pezzulo", |
| "suffix": "" |
| }, |
| { |
| "first": "Alfio", |
| "middle": [], |
| "last": "Gliozzo", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "First International Global WordNet Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernardo Magnini, Carlo Strapparava, Giovanni Pezzulo, and Alfio Gliozzo. 2002b. Comparing ontology-based and corpus-based domain annotations in wordnet. In First International Global WordNet Conference, Mysore, India.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Generalized fast approximate energy minimization via graph cuts: Alpha-expansion beta-shrink moves", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Mark", |
| "suffix": "" |
| }, |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Schmidt", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Alahari", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark W. Schmidt and Karteek Alahari. 2011. Generalized fast approximate energy minimization via graph cuts: Alpha-expansion beta-shrink moves. CoRR, abs/1108.5710.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A comparative study of energy minimization methods for markov random fields with smoothnessbased priors", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Szeliski", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramin", |
| "middle": [], |
| "last": "Zabih", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Scharstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Veksler", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Kolmogorov", |
| "suffix": "" |
| }, |
| { |
| "first": "Aseem", |
| "middle": [], |
| "last": "Agarwala", |
| "suffix": "" |
| }, |
| { |
| "first": "Marshall", |
| "middle": [], |
| "last": "Tappen", |
| "suffix": "" |
| }, |
| { |
| "first": "Carsten", |
| "middle": [], |
| "last": "Rother", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "IEEE Trans. Pattern Anal. Mach. Intell", |
| "volume": "30", |
| "issue": "6", |
| "pages": "1068--1080", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Szeliski, Ramin Zabih, Daniel Scharstein, Olga Veksler, Vladimir Kolmogorov, Aseem Agarwala, Marshall Tappen, and Carsten Rother. 2008. A comparative study of energy minimization methods for markov random fields with smoothness-based priors. IEEE Trans. Pattern Anal. Mach. Intell., 30(6):1068-1080.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Rethinking wordnet's domains", |
| "authors": [ |
| { |
| "first": "Ma", |
| "middle": [], |
| "last": "Xiaojuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of 6th International Global WordNet Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ma Xiaojuan and Christiane Fellbaum. 2012. Rethinking wordnet's domains. In Proceedings of 6th International Global WordNet Conference, Matsue, Japan, January.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Domain ontology generation based on wordnet and internet", |
| "authors": [ |
| { |
| "first": "Wang", |
| "middle": [], |
| "last": "Yanna", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhou", |
| "middle": [], |
| "last": "Zili", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the International Conference on Management and Service Science, 2009. MASS '09", |
| "volume": "", |
| "issue": "", |
| "pages": "1--5", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang Yanna and Zhou Zili. 2009. Domain ontology generation based on wordnet and internet. In Proceedings of the International Conference on Management and Service Science, 2009. MASS '09, pages 1-5.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Gloss-based word domain assignment", |
| "authors": [ |
| { |
| "first": "Chaoyong", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shumin", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Haijun", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "7th International Conference on Natural Language Processing and Knowledge Engineering", |
| "volume": "", |
| "issue": "", |
| "pages": "150--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chaoyong Zhu, Shumin Shi, and Haijun Zhang. 2011. Gloss-based word domain assignment. In 7th International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE), pages 150-155.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "The weight of the edge between v_i and v_j is calculated by taking the cosine of the term vectors of v_i and v_j, as shown in Fig. 2.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Graph Cut Based Energy MinimizationInput: set of labels L, undirected graph G(", |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "System Architecture. This module takes document graphs as input and constructs a cohesive, domain-specific conceptual structure. To do this, we need to classify each node in the corpus graph into the various domains. Assignment of a domain label to a node depends on two parameters:\n\u2022 Term Cost: This measures how strongly a term belongs to the domain. It is measured by the frequency of occurrence of the term within the domain, and is formulated as a cost function as shown in Equation 1.", |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Figure 3: Domain Labeling", |
| "num": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Link with Wordnet\nG(V, E)\nV := vertices arranged in breadth-first order\nE := set of edges\n|V| := m\n|E| := n\nfor v_i \u2208 V do\n  create node v_i for each sense of v_i\n  distribute edges across senses\nend for\nv := new sense vertex set; k := |v|\nfor i := 0 \u2192 k do\n  if Edge set of v_i == 0 then delete v_i end if\n  for j := 0 \u2192 k do\n    if Edge set of v_i == Edge set of v_j then merge v_i and v_j end if\n  end for\nend for", |
| "num": null |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Figure 4: Sense Splitting", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "", |
| "content": "<table><tr><td>: Corpus Statistics</td></tr><tr><td>main tag.</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |