{
"paper_id": "Y03-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:34:39.684230Z"
},
"title": "Corpus-based Ontology Learning for Word Sense Disambiguation",
"authors": [
{
"first": "Jae",
"middle": [],
"last": "Kang",
"suffix": "",
"affiliation": {},
"email": "sjkang@daegu.ac.kr"
},
{
"first": "Nairi",
"middle": [],
"last": "-Ri",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
    "abstract": "This paper proposes to disambiguate word senses by corpus-based ontology learning. Our approach is a hybrid method. First, we apply the previously-secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology was calculated before using the ontology as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we then locate the least weighted path from one concept to the other. In our practical machine translation system, our word sense disambiguation method achieved a 9% improvement over methods that do not use the ontology for Korean translation.",
"pdf_parse": {
"paper_id": "Y03-1044",
"_pdf_hash": "",
"abstract": [
{
        "text": "This paper proposes to disambiguate word senses by corpus-based ontology learning. Our approach is a hybrid method. First, we apply the previously-secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology was calculated before using the ontology as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we then locate the least weighted path from one concept to the other. In our practical machine translation system, our word sense disambiguation method achieved a 9% improvement over methods that do not use the ontology for Korean translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
        "text": "An ontology is a knowledge base with information about concepts existing in the world, their properties, and how they relate to each other. An ontology differs from a thesaurus in that it contains only language-independent information and many other semantic relations in addition to taxonomic relations. In this paper, we propose to use the ontology to disambiguate word senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
        "text": "All approaches to word sense disambiguation (WSD) make use of words in a sentence to mutually disambiguate each other. The distinctions between various approaches lie in the source and type of knowledge used by the lexical units in a sentence. Thus, all these approaches can be classified into AI-based, knowledge-based, or corpus-based approaches, according to their sources and types of knowledge (Ide, 1998). AI-based WSD methods (Dahlgren, 1988) use a semantic network, or frames containing information about word functions and the relation to other words in individual sentences; or preference semantics, which specifies selectional restrictions for combinations of lexical items in a sentence. The difficulty of handcrafting the knowledge sources is the major disadvantage of AI-based systems. Knowledge-based methods (Resnik, 1995a; Yarowsky, 1992) have utilized machine-readable dictionaries (MRDs), thesauri, and computational lexicons, such as WordNet. Since most MRDs and thesauri were created for human use and display inconsistencies, these methods have clear limitations. Corpus-based methods (Dagan, 1994; Gale, 1992) extract statistical information from corpora, which may be monolingual or bilingual, and raw or sense-tagged. The problem of data sparseness commonly occurs in the corpus-based approach, and is especially severe in WSD. A smoothing and concept-based method is used to address this problem.",
"cite_spans": [
{
"start": 399,
"end": 410,
"text": "(Ide, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 434,
"end": 449,
"text": "(Dahlgren, 1988",
"ref_id": "BIBREF2"
},
{
"start": 828,
"end": 843,
"text": "(Resnik, 1995a;",
"ref_id": "BIBREF11"
},
{
"start": 844,
"end": 859,
"text": "Yarowsky, 1992)",
"ref_id": "BIBREF13"
},
{
"start": 1110,
"end": 1123,
"text": "(Dagan, 1994;",
"ref_id": "BIBREF1"
},
{
"start": 1124,
"end": 1135,
"text": "Gale, 1992)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Our WSD approach is a hybrid method, which combines the advantages of corpus-based and knowledge-based methods. We use our semi-automatically constructed ontology as an external knowledge source and secured dictionary information as context information. First, we apply the previously-secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
      {
        "text": "The remainder of this paper is organized as follows. In the next section, we briefly describe the semi-automatic ontology construction methodology. The ontology learning is explained in Section 3. [Figure 1, the 4-level taxonomic hierarchy of the Kadokawa thesaurus (a root dummy node over top-level categories nature, character, change, action, feeling, human, disposition, society, institute, and things, with subclasses such as goods, medicine, stationery, and machine), appeared here.]",
        "cite_spans": [],
        "ref_spans": [],
        "eq_spans": [],
        "section": "Introduction",
        "sec_num": "1."
      },
{
        "text": "To construct a practical ontology for WSD, we developed two strategies. First, we introduced the same number and grain size of concepts of the Kadokawa thesaurus (Ohno & Hamanishi, 1981) and its taxonomic hierarchy into the ontology. The thesaurus has 1,110 semantic categories and a 4-level hierarchy as a taxonomic relation (Fig. 1). Semantic categories in levels L1, L10, and L100 are further divided into 10 subclasses. The root node is merely a dummy node. Noun and verb categories coexist in the same taxonomic hierarchy of the Kadokawa thesaurus. Verb categories mainly correspond to the codes 2xx, 3xx, and 4xx in level L1000. This approach is a moderate shortcut to constructing a practical ontology and easily enables us to utilize its results, since some resources are readily available, such as the bilingual dictionaries of COBALT-J/K (Collocation-Based Language Translator from Japanese to Korean) (Park et al., 1997) and COBALT-K/J (Collocation-Based Language Translator from Korean to Japanese), which are machine translation systems developed by POSTECH (Pohang University of Science and Technology, Korea). In these bilingual dictionaries, nominal and verbal words are already annotated with concept codes from the Kadokawa thesaurus. By using the same sense inventories as these MT systems, we can easily apply and evaluate our ontology without additional lexicographic work. In addition, the Kadokawa thesaurus proved to be useful for providing a fundamental foundation to build lexical disambiguation knowledge in the COBALT-J/K and COBALT-K/J MT systems (Li et al., 2000).",
"cite_spans": [
{
"start": 162,
"end": 186,
"text": "(Ohno & Hamanishi, 1981)",
"ref_id": "BIBREF9"
},
{
"start": 1573,
"end": 1590,
"text": "(Li et al., 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "(Fig. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ontology Construction",
"sec_num": "2."
},
{
        "text": "The second strategy to construct a practical ontology is to extend the hierarchy of the Kadokawa thesaurus by inserting additional semantic relations into its hierarchy. The additional semantic relations can be classified as case relations and other semantic relations. Thus far, case relations have occasionally been used to disambiguate lexical ambiguities in the form of valency information and case frames, but other semantic relations have not, because of the problem of discriminating them from each other, which makes them difficult to recognize. We define a total of 30 semantic relation types for WSD by referring mainly to the SELK (Sejong Electronic Lexicon of Korean) (Hong & Pak, 2001) and the Mikrokosmos ontology (Mahesh, 1996), as shown in Table 1. The other semantic relations include has-member, has-element, contains, material-of, headed-by, operated-by, controls, owner-of, represents, symbol-of, name-of, producer-of, composer-of, inventor-of, make, and measured-in. These semantic relation types cannot express all possible semantic relations existing among concepts, but experimental results demonstrated their usefulness for WSD. Table 2 shows the number of semantic relations semi-automatically inserted into the ontology from computational dictionaries and large corpora.",
"cite_spans": [
{
"start": 674,
"end": 691,
"text": "(Hong & Pak, 2001",
"ref_id": "BIBREF4"
},
{
"start": 723,
"end": 737,
"text": "(Mahesh, 1996)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 752,
"end": 759,
"text": "Table 1",
"ref_id": null
},
{
"start": 1131,
"end": 1138,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Ontology Construction",
"sec_num": "2."
},
{
"text": "To use the ontology in natural language processing (NLP) applications, a scoring mechanism was required to determine whether the governor and dependent concepts satisfy their semantic constraints in the ontology. Therefore, in order to measure concept association, we use an association ratio based on the information theoretic concept of mutual information (MI), which is a natural measure of the dependence between random variables (Church & Hanks, 1989) . Resnik (1995b) suggested a measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. However, his method differs from ours in that we consider all semantic relations in the ontology, not taxonomy relations only. To implement this idea, source concepts (SC) and semantic relations (SR) are bound into one entity, since SR is mainly influenced by SC, not the destination concepts (DC). Therefore, if two entities, < SC, SR>, and DC have probabilities P(<SC, SR>) and P(DC), then their mutual information I(<SC, SR>, DC) is defined as:",
"cite_spans": [
{
"start": 434,
"end": 456,
"text": "(Church & Hanks, 1989)",
"ref_id": "BIBREF0"
},
{
"start": 459,
"end": 473,
"text": "Resnik (1995b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ontology Learning",
"sec_num": "3."
},
{
        "text": "I(<SC, SR>, DC) = log2 [ P(<SC, SR>, DC) / ( P(<SC, SR>) P(DC) ) + 1 ] (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ontology Learning",
"sec_num": "3."
},
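Eq. (1) is pointwise mutual information over the bound entity <SC, SR> and the destination concept DC, with 1 added inside the logarithm so the score stays non-negative. A minimal sketch of the computation, assuming hypothetical concept and relation names (the paper's actual inventory uses Kadokawa thesaurus codes):

```python
import math
from collections import Counter

# Hypothetical <SC (governor), SR, DC (dependent), frequency> training
# instances, standing in for the tuples extracted from the corpora.
instances = [
    ("human", "agent-of", "action", 40),
    ("human", "agent-of", "feeling", 10),
    ("machine", "instrument-of", "action", 25),
    ("goods", "theme-of", "action", 25),
]

total = sum(freq for *_, freq in instances)
joint = Counter()   # counts of the pair (<SC, SR>, DC)
left = Counter()    # counts of the bound entity <SC, SR>
right = Counter()   # counts of DC
for sc, sr, dc, freq in instances:
    joint[((sc, sr), dc)] += freq
    left[(sc, sr)] += freq
    right[dc] += freq

def mutual_information(sc, sr, dc):
    """Eq. (1): I(<SC,SR>, DC) = log2( P(<SC,SR>, DC) / (P(<SC,SR>) P(DC)) + 1 )."""
    p_joint = joint[((sc, sr), dc)] / total
    p_left = left[(sc, sr)] / total
    p_dc = right[dc] / total
    return math.log2(p_joint / (p_left * p_dc) + 1.0)
```

Binding SC and SR into one entity, as the text describes, means the probabilities are estimated over (governor, relation) pairs rather than over governors alone.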
{
        "text": "The MI between concepts in the ontology must be calculated before using the ontology as knowledge for disambiguating word senses. Figure 2 shows the construction process for training data in the form of <SC (governor), SR, DC (dependent), frequency> and the calculation of MI between the ontology concepts. We performed a slight modification on COBALT-K/J and COBALT-J/K to enable them to produce sense-tagged valency information instances with the specific concept codes of the Kadokawa thesaurus. After producing the instances, we converted syntactic relations into semantic relations by relying on specific rules and human intuition (Fig. 3). As a result, we extracted sufficient training data from the Korean raw corpus, which has 70 million words, and the Japanese raw corpus, which has 810,000 sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 2",
"ref_id": null
},
{
"start": 640,
"end": 648,
"text": "(Fig. 3)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ontology Learning",
"sec_num": "3."
},
      {
        "text": "Figure 4 shows the proposed WSD algorithm.",
        "cite_spans": [],
        "ref_spans": [
          {
            "start": 0,
            "end": 8,
            "text": "Figure 4",
            "ref_id": null
          }
        ],
        "eq_spans": [],
        "section": "Ontology Learning",
        "sec_num": "3."
      },
{
"text": "The ontology is applicable to many fields. In this paper, we propose to use the ontology to disambiguate word senses. All approaches to word sense disambiguation (WSD) make use of words in a sentence to mutually disambiguate each other. The distinctions between various approaches lie in the source and type of knowledge constructed by the lexical units in a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4."
},
{
"text": "Our WSD approach is a hybrid method, which combines the advantages of corpus-based and knowledge-based methods. We use the ontology as an external knowledge source and secured dictionary information as context information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4."
},
{
        "text": "For a given ambiguous word W, Figure 4 describes our overall WSD algorithm. First, the verb's valency information is applied by using formulas (2) and (3). S(W) denotes the set of word senses of the word W, and SR(V) a selectional restriction of a verb V that takes the word W as its argument. Ci and Pj are concept types. Csim(Ci, Pj) in Eq. (2) is used to compute the concept similarity between Ci and Pj, where MSCA(Ci, Pj) is the most specific common ancestor of concept types Ci and Pj. If the matching score of valency information (Eq. (3)) is greater than a threshold, then set the sense of the word W to Ci and exit. Otherwise, local syntactic patterns (LSPs) and unordered co-occurring word patterns (UCWs) are used in order (Li et al., 2000).",
"cite_spans": [
{
"start": 666,
"end": 683,
"text": "(Li et al., 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4."
},
{
        "text": "Csim(Ci, Pj) = ( 2 * level(MSCA(Ci, Pj)) / ( level(Ci) + level(Pj) ) ) * weight (2) Vsim(S(W), SR(V)) = max Csim(Ci, Pj), Ci ∈ S(W), Pj ∈ SR(V) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4."
},
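Eqs. (2) and (3) only need the hierarchy levels of the two concepts and of their most specific common ancestor (MSCA). A sketch over a toy taxonomy, with hypothetical Kadokawa-style codes and the weight factor fixed at 1.0:

```python
# Toy 4-level taxonomy in the style of the Kadokawa thesaurus; the codes and
# parent links here are illustrative, not the real thesaurus content.
parent = {
    "0": "root", "4": "root",    # level-1 categories
    "09": "0", "41": "4",        # level-2
    "090": "09", "417": "41",    # level-3
}

def ancestors(c):
    """Concept followed by its ancestors up to the root, nearest first."""
    chain = [c]
    while c in parent:
        c = parent[c]
        chain.append(c)
    return chain

def level(c):
    return len(ancestors(c)) - 1   # the root has level 0

def msca(c1, c2):
    """Most specific common ancestor of two concepts."""
    a1 = ancestors(c1)
    for a in ancestors(c2):
        if a in a1:
            return a
    return "root"

def csim(c, p, weight=1.0):
    """Eq. (2): Csim(Ci, Pj) = 2*level(MSCA(Ci,Pj)) / (level(Ci)+level(Pj)) * weight."""
    return 2 * level(msca(c, p)) / (level(c) + level(p)) * weight

def vsim(senses, restriction):
    """Eq. (3): the best Csim over all sense/restriction concept pairs."""
    return max(csim(c, p) for c in senses for p in restriction)
```

Concepts in the same subtree score close to 1, while concepts whose only common ancestor is the root score 0, which matches the threshold test in the algorithm.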
{
"text": "If lexical information such as valency information, LSPs and UCWs is unavailable, or fails to match with other words, we use the ontology as a second knowledge base. The roles of the ontology in WSD are as follows. First, if previously-secured information for a concept is not available in a dictionary, the ontology provides extended semantic constraints for the concept. The extended semantic constraints were made in the previous ontology-building phase by other semantic constraints, including the same concept code. Second, if a direct semantic relation between concepts is not available in the ontology, the ontology and its scoring mechanism provide a relaxation procedure, which approximates their semantic association. The following are detailed descriptions of the procedure for applying the ontology to WSD work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4."
},
{
        "text": "If MI is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges. All edge weights are non-negative, and the weights are converted into penalties by the formula below, where c indicates a constant, such as the maximum MI value over all possible pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4."
},
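Under this view, locating the least weighted path is a standard shortest-path search over penalty edges. The sketch below uses Dijkstra's algorithm over made-up MI values; the paper's actual search mechanism additionally applies the heuristics described later in this section:

```python
import heapq

# Hypothetical MI scores between concept pairs (undirected for simplicity).
mi = {
    ("human", "action"): 2.0,
    ("human", "institute"): 0.5,
    ("institute", "goods"): 1.8,
    ("action", "goods"): 0.2,
}

c = max(mi.values())                         # the constant c of Eq. (4)
penalty = {e: c - v for e, v in mi.items()}  # Pe = c - I: high MI -> low penalty

def neighbors(node):
    for (a, b), p in penalty.items():
        if a == node:
            yield b, p
        elif b == node:
            yield a, p

def least_weighted_path(src, dst):
    """Dijkstra search for the minimum-penalty path between two concepts."""
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, p in neighbors(node):
            if nxt not in seen:
                heapq.heappush(heap, (cost + p, nxt, path + [nxt]))
    return float("inf"), []
```

A low total penalty means the chain of semantic relations between the two concepts is strongly attested, which is how the relaxation procedure approximates an association when no direct relation exists.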
      {
        "text": "Pe(<SC, SR>, DC) = c - I(<SC, SR>, DC) (4) We use the formulas below to locate the least weighted path from one concept to the other. The score functions are defined respectively as: Score*(Ci, Cj) = 0 if Ci = Cj; min over Rp of Pe(<Ci, Rp>, Cj) if Ci != Cj and Ci, Cj have direct relations Rp; min over Ck of ( Score*(Ci, Ck) * Score*(Ck, Cj) ) otherwise, where Ck ranges over the concepts on the path Ci -> Cj (5-1). Score(Ci, Cj) = 0 if Ci = Cj; min over Rp of Pe(<Ci, Rp>, Cj) if Ci != Cj and Ci, Cj have direct relations Rp; min over Ck of ( Score(Ci, Ck) + Score(Ck, Cj) ) otherwise (5-2).",
        "cite_spans": [],
        "ref_spans": [],
        "eq_spans": [],
        "section": "Word Sense Disambiguation",
        "sec_num": "4."
      },
{
        "text": "Here C and R indicate concepts and semantic relations, respectively. It was found that the Score* and Score formulas show almost the same performance when inferring with the ontology. By applying these formulas, we can verify how well selectional constraints between concepts are satisfied. In addition, if there is no direct semantic relation between concepts, these formulas provide a relaxation procedure, which approximates their semantic relations. This characteristic enables us to obtain hints toward resolving metaphor and metonymy expressions.",
        "cite_spans": [],
        "ref_spans": [],
        "eq_spans": [],
        "section": "Word Sense Disambiguation",
        "sec_num": "4."
},
{
        "text": "To locate the best path, the search mechanism of the ontology applies the following heuristics. Firstly, taxonomic relations must be treated differently from other semantic relations, because they inherently lack frequencies between parent and child concepts; so we experimentally assign a fixed weight to those edges. Secondly, the weight given to an edge is sensitive to the context of prior edges in the path; therefore, our mechanism restricts the number of times that a particular relation can be traversed in one path. Thirdly, the mechanism avoids an excessive change in the gradient.",
        "cite_spans": [],
        "ref_spans": [],
        "eq_spans": [],
        "section": "Word Sense Disambiguation",
        "sec_num": "4."
},
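The first two heuristics can be illustrated with a small exhaustive search: taxonomic (is-a) edges carry a fixed penalty, and a counter caps how many times one relation type may recur on a path. The edges, relation names, and penalty values below are hypothetical:

```python
from collections import Counter

# Hypothetical typed edges: (concept, relation, concept, penalty).
edges = [
    ("human", "is-a", "animate", 1.0),   # taxonomic edges get a fixed penalty
    ("animate", "is-a", "thing", 1.0),
    ("human", "agent-of", "action", 0.3),
    ("action", "theme-of", "thing", 0.8),
]

MAX_PER_RELATION = 2   # cap on how often one relation type may recur in a path

def best_path(src, dst, cost=0.0, path=None, used=None, limit=6):
    """Depth-first search honouring the per-relation traversal limit."""
    path = path or [src]
    used = used or Counter()
    if src == dst:
        return cost, path
    best = (float("inf"), [])
    if len(path) > limit:
        return best
    for a, rel, b, p in edges:           # edges treated as undirected
        nxt = b if a == src else a if b == src else None
        if nxt is None or nxt in path or used[rel] >= MAX_PER_RELATION:
            continue
        used[rel] += 1
        cand = best_path(nxt, dst, cost + p, path + [nxt], used, limit)
        used[rel] -= 1
        if cand[0] < best[0]:
            best = cand
    return best
```

Here the cheap typed relations win over the fixed-penalty taxonomic chain, mirroring the idea that attested semantic relations should be preferred to pure IS-A traversal.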
{
"text": "We performed an evaluation of the proposed WSD algorithm using the ontology. This section presents the experimental results. Eight ambiguous nouns and four ambiguous verbs were selected, along with a total of 604 test sentences in which one test noun or verb appears. The test sentences were randomly selected from the raw Korean corpus. Out of several senses for each ambiguous word, we considered only two or three senses that are most frequently used in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5."
},
{
        "text": "We performed three experiments with the MT system. The first experiment, BASE, is the case where the most frequently used senses are always taken as the senses of test words. The purpose of this experiment is to show a baseline for WSD work. The second, LEX, uses only secured dictionary information, such as the selectional restriction of verbs, local syntactic patterns, and unordered co-occurring word patterns, in disambiguating word senses. This is a general method without an ontology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5."
},
{
        "text": "The third, ONTO, shows the results of our WSD method using the ontology. The experimental results are compared with each other in Table 3. In these experiments, the ONTO method achieved a 9% improvement over the LEX method. Table 4 shows the applicability and precision for each phase in the WSD algorithm. In the ontology phase, the applicability was 18.1% and the precision was 86.4%. The main reason for these results is that, in the absence of secured dictionary information about an ambiguous word, the ontology provides an extended case frame via the concept code of the word. In addition, when there is no direct semantic constraint between concepts, our search mechanism provides a relaxation procedure. Therefore, the quality and usefulness of the ontology were indirectly demonstrated by these results.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 225,
"end": 232,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5."
},
{
        "text": "In this paper we have proposed a corpus-based ontology learning method and an ontology-based WSD algorithm. The ontology, which includes extensive semantic relations between concepts, differs from many resources in that it contains no language-dependent knowledge: it is a network of concepts, not words. The ontology can be applied to other languages if the concept codes, which correspond to the senses of each headword, are merely inserted into their dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
        "text": "In order to learn the ontology for WSD, we automatically produced sense-tagged valency information instances from large raw corpora. After producing the instances, we semi-automatically converted syntactic relations into semantic relations, and then the mutual information between concepts in the ontology was calculated. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and the weights are converted into penalties. By locating the least weighted path from one concept to the other, we can verify how well selectional constraints between concepts are satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
        "text": "The ontology is applied to disambiguate word senses in the form of an ontological graph search. The search mechanism determines whether selectional constraints between concepts are satisfied, and includes a relaxation procedure, which enables concept pairs with no direct selectional restriction to approximate their semantic association. This characteristic enables us to obtain hints toward resolving metaphor and metonymy expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
        "text": "The ontology calls for more specific concepts and semantic relations to improve the WSD performance. A further direction of this study will focus on how to combine the concept of the semantic web (Berners-Lee et al., 2001) and the ontology in NLP applications.",
"cite_spans": [
{
"start": 225,
"end": 229,
"text": "2001",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [
{
        "text": "This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center (AITrc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "T",
"middle": [],
                    "last": "Berners-Lee",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hendler",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Berners-Lee, T., Hendler, J., and Lassila, O. 2001. The Semantic Web. Scientific American, May. Church, K. and Hanks, P. 1989. Word association norms, mutual information, and lexicography, In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, pp. 76-83.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word sense disambiguation using a second language monolingual corpus",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Itai",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "563--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, I. and Itai, A. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563-596.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Naive Semantics for Natural Language Understanding",
"authors": [
{
"first": "K",
"middle": [
"G"
],
"last": "Dahlgren",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dahlgren, K. G. 1988. Naive Semantics for Natural Language Understanding. Kluwer Academic Pub., Boston.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A method for disambiguating word senses in a large corpus",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "Yarowsky",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1992,
"venue": "Computers and the Humanities",
"volume": "26",
"issue": "5",
"pages": "415--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W. A., Church, K. W., and Yarowsky, D. 1992. A method for disambiguating word senses in a large corpus. Computers and the Humanities, 26(5):415-439.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Developing a large scale computational lexical database of contemporary Korean : SELK",
"authors": [
{
"first": "C",
"middle": [
"S"
],
"last": "Hong",
"suffix": ""
},
{
"first": "M",
"middle": [
"G"
],
"last": "Pak",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 19th International Conference on Computer Processing of Oriental Languages",
"volume": "",
"issue": "",
"pages": "20--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong, C. S. and Pak, M. G. 2001. Developing a large scale computational lexical database of contemporary Korean : SELK, In Proceedings of the 19th International Conference on Computer Processing of Oriental Languages (ICCPOL 2001), Seoul, Korea, pp. 20-26.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction to the special issue on word sense disambiguation: the state of the art",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Veronis",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ide, N. and Veronis, J. 1998. Introduction to the special issue on word sense disambiguation: the state of the art. Computational Linguistics, 24(1) : 1-40.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Lexical Transfer Ambiguity Resolution Using Automatically-Extracted Concept Co-occurrence Information",
"authors": [
{
"first": "H",
"middle": [
"F"
],
"last": "Li",
"suffix": ""
},
{
"first": "N",
"middle": [
"W"
],
"last": "Heo",
"suffix": ""
},
{
"first": "K",
"middle": [
"H"
],
"last": "Moon",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "G",
"middle": [
"B"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2000,
"venue": "International Journal of Computer Processing of Oriental Languages",
"volume": "13",
"issue": "1",
"pages": "53--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, H. F., Heo, N. W., Moon, K. H., Lee, J. H., and Lee, G. B. 2000. Lexical Transfer Ambiguity Resolution Using Automatically-Extracted Concept Co-occurrence Information, International Journal of Computer Processing of Oriental Languages, World Scientific Pub., 13(1):53-68.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Knowledge-based systems for Natural Language Processing, Memoranda in Computer and Cognitive Science",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mahesh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nirenburg",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "96--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahesh, K., and Nirenburg, S. 1996. Knowledge-based systems for Natural Language Processing, Memoranda in Computer and Cognitive Science. NMSU CRL Technical Report, MCCS-96-296.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Representation and Recognition Method for Multi-Word Translation Units in Korean-to-Japanese MT System",
"authors": [
{
"first": "K",
"middle": [
"H"
],
"last": "Moon",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2000,
"venue": "the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "544--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moon, K. H. and Lee, J. H. 2000. Representation and Recognition Method for Multi-Word Translation Units in Korean-to-Japanese MT System, In the 18th International Conference on Computational Linguistics (COLING 2000), Germany, pp. 544-550.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "New Synonyms Dictionary, Kadokawa Shoten",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ohno",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hamanishi",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ohno, S. and Hamanishi, M. 1981. New Synonyms Dictionary, Kadokawa Shoten, Tokyo. (Written in Japanese.)",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Collocation-Based Transfer Method in Japanese-Korean Machine Translation",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Park",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "G",
"middle": [
"B"
],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kakechi",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "38",
"issue": "",
"pages": "707--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park, C. J., Lee, J. H., Lee, G. B., and Kakechi, K. 1997. Collocation-Based Transfer Method in Japanese- Korean Machine Translation, Transaction of Information Processing Society of Japan, 38(4):707-718. (Written in Japanese)",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Disambiguating noun groupings with respect to WordNet senses",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Third Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "54--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. 1995a. Disambiguating noun groupings with respect to WordNet senses. In Proceedings of the Third Workshop on Very Large Corpora, Cambridge, MA, pp.54-68.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using Information Content to Evaluate Semantic Similarity in a Taxonomy",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
            "venue": "Proceedings of IJCAI-95",
"volume": "",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Resnik, P. 1995b. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. In Proceedings of IJCAI-95, Montreal, Canada, pp. 448-453.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Word sense disambiguation using statistical models of Roget's categories trained on large corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
            "venue": "The 14th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "454--460",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Yarowsky, D. 1992. Word sense disambiguation using statistical models of Roget's categories trained on large corpora. The 14th International Conference on Computational Linguistics (COLING 1992), Nantes, France, pp. 454-460.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Example of conversion from syntactic patterns to semantic patterns",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
            "text": "Score*(Ci, Cj) = 0 if Ci = Cj; min over Rp of Pe(<Ci, Rp>, Cj) if Ci != Cj and Ci, Cj have direct relations Rp; min over Ck in {concepts in Ci -> Cj} of ( Score*(Ci, Ck) * Score*(Ck, Cj) ) otherwise (5-1). Score(Ci, Cj) = 0 if Ci = Cj; min over Rp of Pe(<Ci, Rp>, Cj) if Ci != Cj and Ci, Cj have direct relations Rp; min over Ck in {concepts in Ci -> Cj} of ( Score(Ci, Ck) + Score(Ck, Cj) ) otherwise (5-2).",
"uris": null
},
"TABREF2": {
"content": "<table><tr><td>Types</td><td>Number</td></tr><tr><td>Taxonomic relations</td><td>1,100</td></tr><tr><td>Case relations</td><td>112,746</td></tr><tr><td>Other semantic relations</td><td>2,093</td></tr><tr><td>Total</td><td>115,939</td></tr></table>",
"text": "The number of ontological relation instances",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table><tr><td>POS</td><td>Lexical word</td><td>Sense</td><td>BASE</td><td>LEX</td><td>ONTO</td></tr><tr><td/><td>Pwuca</td><td colspan=\"2\">father &amp; child / rich man 65.3</td><td>69.2</td><td>86.0</td></tr><tr><td/><td>Kancang</td><td>liver I soy sauce</td><td>66.0</td><td>87.8</td><td>91.8</td></tr><tr><td/><td>Kasa</td><td>housework / words of song</td><td>48.0</td><td>88.5</td><td>96.1</td></tr><tr><td>Noun</td><td>Kwutwu</td><td>shoe / word of mouth</td><td>78.0</td><td>85.7</td><td>95.9</td></tr><tr><td/><td>Nwun</td><td>eye 1 snow</td><td>82.0</td><td>96.0</td><td>94.0</td></tr><tr><td/><td>Yongki</td><td>courage / container</td><td>62.0</td><td>74.0</td><td>82.0</td></tr><tr><td/><td>Kyengpi</td><td>expenses / defense</td><td>74.5</td><td>78.4</td><td>90.2</td></tr><tr><td/><td>Kyeons-ki</td><td>times / match</td><td>52.9</td><td>80.4</td><td>93.2</td></tr><tr><td/><td>Nayli-ta</td><td>get off / draw</td><td>42.0</td><td>72.0</td><td>88.0</td></tr><tr><td>Verb</td><td>Seywu-ta Ssu-ta</td><td colspan=\"2\">make (a plan) / build use I write /put on (a hat) 46.0 54.0 ,</td><td>88.0 86.0</td><td>95.4 96.0</td></tr><tr><td/><td>Taywu-ta</td><td>burn / give a ride</td><td>50.0</td><td>86.0</td><td>92.0</td></tr><tr><td>,</td><td colspan=\"2\">Average Precision</td><td>60.1</td><td>82.7</td><td>91.7</td></tr></table>",
"text": "Experimental results of word sense disambiguation in COBALT-KJJ (%)",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Phase</td><td colspan=\"2\">Applicability (%) Precision (%)</td></tr><tr><td>Verb's valency information</td><td>34.8</td><td>91.6</td></tr><tr><td>Local syntactic patterns (LSPs)</td><td>9.8</td><td>91.4</td></tr><tr><td>Unordered co-occurring word patterns (UCWs)</td><td>28.2</td><td>92.3</td></tr><tr><td>Infer with the ontology</td><td>18.1</td><td>86.4</td></tr><tr><td>Take the most frequently appearing sense</td><td>9.1</td><td>74.2</td></tr><tr><td>Sum / Average Precision according to Applicability</td><td>100</td><td>89.2</td></tr></table>",
"text": "Applicability and precision for each phase in the WSD algorithm",
"type_str": "table",
"num": null,
"html": null
}
}
}
}