{
"paper_id": "Y04-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:35:11.862550Z"
},
"title": "Adaptive Word Sense Tagging on Chinese Corpus",
"authors": [
{
"first": "Sue-Jin",
"middle": [],
"last": "Ker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University Taipei",
"location": {
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Jen-Nan",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ming Chuan University, Taipei",
"location": {
"country": "Taiwan"
}
},
"email": "jnchen@mcu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This study describes a general framework for adaptive word sense disambiguation. The proposed framework begins with knowledge acquisition from the relatively easy contexts of a corpus. It then relies heavily on an adaptive step that enriches the initial knowledge base with knowledge gleaned from the partially disambiguated text. Once adjusted to fit the text at hand, the knowledge base is applied to the text again to finalize the disambiguation decision. The effectiveness of this approach was examined through sentences from the Sinica corpus. Experimental results indicated that adaptation significantly improved the performance of WSD. Moreover, the adaptive approach achieved an applicability improvement from 33.0% to 74.9% with comparable precision.",
"pdf_parse": {
"paper_id": "Y04-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "This study describes a general framework for adaptive word sense disambiguation. The proposed framework begins with knowledge acquisition from the relatively easy contexts of a corpus. It then relies heavily on an adaptive step that enriches the initial knowledge base with knowledge gleaned from the partially disambiguated text. Once adjusted to fit the text at hand, the knowledge base is applied to the text again to finalize the disambiguation decision. The effectiveness of this approach was examined through sentences from the Sinica corpus. Experimental results indicated that adaptation significantly improved the performance of WSD. Moreover, the adaptive approach achieved an applicability improvement from 33.0% to 74.9% with comparable precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word sense disambiguation (WSD) is a long-standing problem in natural language understanding. Statistically acquiring sufficient knowledge about a language to build a robust WSD system is extremely difficult. For such a system to be efficient, a large mass of balanced materials must be gathered to cover many idiosyncratic facets of the language. Three issues must be addressed in a lexicalized statistical WSD model: data sparseness, lack of abstraction, and static learning. First, a word-based model has a multiplicity of parameters that are difficult to measure consistently, even with an extremely large corpus. Under-trained models lead to low precision. Second, word-based models lack a crucial degree of abstraction for a broad-coverage system. Third, a static WSD model is probably neither robust nor portable, since it is difficult to construct a model relevant to a broad range of unrestricted texts. Several WSD systems have been created that apply word-based models to a specific domain to disambiguate senses appearing in generally easy contexts with a large number of typically salient words. In an unrestricted text, however, the context is usually diverse and difficult to capture with a lexicalized model; therefore, a corpus-trained system is unlikely to transfer suitably to a new domain. Generality and adaptability are, therefore, essential to a robust and portable WSD system. An adaptive system, armed with an initial knowledge base extracted from defined words, is superior in two ways to static word-based models trained on a corpus. First, the initial knowledge is sufficiently rich and unbiased for a large portion of text to be disambiguated correctly. Second, based on the initial disambiguation, an adaptation step can then be implemented to render the knowledge base more relevant to the task, thus resulting in broader and more precise WSD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study explores in detail whether word-based knowledge provides a general solution for disambiguating contexts of unrestricted texts. This method assumes that a major part of a given text is easy or prototypical and, therefore, understandable using general knowledge. Adapting the contextual representation of word senses to those in the easy context will hopefully allow us to interpret the other part, which is normally considered a hard context. Adaptation makes the knowledge base more relevant to the text and, therefore, more effective for WSD in a hard context. Experimental results demonstrate the feasibility of this adaptive WSD approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 reviews recent WSD literature from the viewpoints of various contextual knowledge types and different representation systems. Section 3 then describes the strategy of using the adapted knowledge base. Next, Section 4 provides a detailed account of experiments conducted to evaluate the effectiveness of the adaptive approach, including the experiment setup, results, and evaluation. Finally, Section 5 draws conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using a machine to select the intended sense of a polysemous word in a specific context has received increasing interest. Various methods of WSD have been proposed recently in the natural language processing literature, and newer ones have rapidly superseded old ideas. Central to these efforts are the contextual knowledge encoded and the way this knowledge is described. This section reviews recent WSD literature from the viewpoints of different forms of contextual knowledge and their representational schemes. Any scheme for gaining contextual information on word sense must begin with a means of identifying the word sense, since a word sense is an abstract concept that is not apparent on the surface. With this completed, the surrounding words can be used to construct a contextual representation of the word sense for WSD. Three approaches are available to divide word senses. First, human means can be used to derive a hand-tagged corpus of word senses. Earlier WSD works adopted this approach and hand tagged the intended sense of each polysemous word in the training corpus (Kelly and Stone 1975; Hearst 1991). Second, the numbered sense entries readily available in a machine-readable dictionary can be taken, with their definitions and examples treated as contextual information (Lesk 1986; Veronis and Ide 1990; Wilks et al. 1990; Guthrie et al. 1991). The third way of eliciting word sense uses linguistic constraints. For instance, three linguistic constraints can be exploited for successful sense tagging and WSD.",
"cite_spans": [
{
"start": 1052,
"end": 1074,
"text": "(Kelly and Stone 1975;",
"ref_id": "BIBREF7"
},
{
"start": 1075,
"end": 1087,
"text": "Hearst 1991)",
"ref_id": "BIBREF6"
},
{
"start": 1260,
"end": 1271,
"text": "(Lesk 1986;",
"ref_id": "BIBREF7"
},
{
"start": 1272,
"end": 1293,
"text": "Veronis and Ide 1990;",
"ref_id": "BIBREF11"
},
{
"start": 1294,
"end": 1312,
"text": "Wilks et al. 1990;",
"ref_id": "BIBREF12"
},
{
"start": 1313,
"end": 1333,
"text": "Guthrie et al. 1991)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "One sense per discourse: The senses of all instances of a polysemous word are highly consistent within any given document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "One sense per collocation: Words in close proximity offer strong and consistent clues to the sense of a target word, conditional on the relative distance, order, and syntactic relationship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "One sense per translation: Translations in a bilingual corpus can represent the senses of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "To exemplify the first constraint, consider the word suit. The constraint captures the intuition that if the first occurrence of suit is a LAWSUIT sense, then later occurrences in the same discourse are also likely to refer to LAWSUIT (Gale, Church and Yarowsky 1992a) . The second constraint reveals that most works on statistical disambiguation have assumed that word sense is closely correlated with particular contextual features, like occurrence of particular words in a window around the ambiguous word. However, Yarowsky (1995) proposed that strong collocations should be identified for WSD. In a bilingual corpus, differences in translations of the polysemous word allowed one to identify the intended sense, particularly in contrasting polysemy. Gale, Church and Yarowsky (1992b) used French translations in parallel texts to disambiguate some polysemous words in English. For instance, the senses of duty were typically translated as two different French words, droit and devoir, respectively, representing the senses tax and obligation. Thus, a number of tax sense examples of duty could be collected by extracting instances of duty that were translated as droit, and the same could be done for obligation sense examples of duty.",
"cite_spans": [
{
"start": 235,
"end": 268,
"text": "(Gale, Church and Yarowsky 1992a)",
"ref_id": "BIBREF4"
},
{
"start": 519,
"end": 534,
"text": "Yarowsky (1995)",
"ref_id": "BIBREF13"
},
{
"start": 755,
"end": 788,
"text": "Gale, Church and Yarowsky (1992b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "Once word senses are identified, the context of a particular word sense can then be gathered and encoded in some way for use in the following disambiguation step. At least two ways are available to encode contextual knowledge. The obvious way, the lexicalized representation, is a surface scheme that keeps a weighted list of words occurring in the context of a particular sense. Conversely, the conceptual representation encodes the categories of words that might appear in the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "The first step of this study was to construct an initial knowledge base from the training corpus; the next is to describe how that knowledge was employed to resolve ambiguity for polysemous words in context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Adaptive Method for WSD",
"sec_num": "3"
},
{
"text": "To avoid accumulating extraneous information during knowledge acquisition, sense information was adopted only from the easy contexts during the acquisition phase. First, a preparatory segmentation was made for the training material. Next, target words in each sentence were labelled with their associated senses. A knowledge base was then constructed based on the occurrence frequency of the words surrounding the target word in each sentence. Finally, the target words' co-occurrence probabilities were computed. The procedure outlined above is summarized as Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct an Initial Knowledge from Training Set",
"sec_num": "3.1"
},
{
"text": "Algorithm 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct an Initial Knowledge from Training Set",
"sec_num": "3.1"
},
{
"text": "Step 1: Let D_w denote the collection of sentences containing a target word w in the training material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct an Initial Knowledge from Training Set",
"sec_num": "3.1"
},
{
"text": "Step 2: For each sense division s of word w, let count(c, w, s) denote the frequency with which word c occurs in the contexts of word w tagged with sense s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct an Initial Knowledge from Training Set",
"sec_num": "3.1"
},
{
"text": "Step 3: For each target word w and each of its sense divisions s, the co-occurrence probability Pr(c, w, s) of a possible context word c and w is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct an Initial Knowledge from Training Set",
"sec_num": "3.1"
},
{
"text": "\\Pr(c, w, s) = \\frac{\\mathrm{count}(c, w, s) + a}{\\sum_{s' \\in \\mathrm{senses}(w)} (\\mathrm{count}(c, w, s') + a)} \\qquad \\text{Eq. 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct an Initial Knowledge from Training Set",
"sec_num": "3.1"
},
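The counting and normalization steps of Algorithm 1 can be sketched as follows. This is a minimal Python rendering under stated assumptions: the function and variable names are hypothetical, the smoothing constant a comes from Eq. 1, and the toy training triples are invented for illustration only.

```python
from collections import defaultdict

def build_knowledge_base(tagged_sentences, a=0.5):
    """Estimate the smoothed co-occurrence probability Pr(c, w, s) of Eq. 1.

    tagged_sentences: iterable of (context_words, target_word, sense) triples,
    where context_words surround the sense-tagged target word.
    """
    # Step 2 of Algorithm 1: count how often context word c co-occurs
    # with target word w tagged as sense s.
    count = defaultdict(int)          # (c, w, s) -> frequency
    senses = defaultdict(set)         # w -> set of observed senses
    for context, w, s in tagged_sentences:
        senses[w].add(s)
        for c in context:
            count[(c, w, s)] += 1

    # Step 3 / Eq. 1: normalize over all senses s' of w, with additive
    # smoothing constant a to avoid zero probabilities.
    def prob(c, w, s):
        denom = sum(count[(c, w, s2)] + a for s2 in senses[w])
        return (count[(c, w, s)] + a) / denom

    return prob

# Toy usage: two training sentences for the target word "bank".
kb = build_knowledge_base([
    (["river", "water"], "bank", "SHORE"),
    (["money", "loan"], "bank", "FINANCE"),
])
```

With these two toy sentences, "river" assigns most of its smoothed probability mass to the SHORE sense, as Eq. 1 intends.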
{
"text": "This section presents an adaptive approach to automatically resolving ambiguity for polysemous words in context. Na\u00efve Bayes is a simple but effective text classification algorithm for learning from labelled data alone (Lewis, 1998; McCallum and Nigam, 1998). The parameterization given by Na\u00efve Bayes defines an underlying generative model assumed by the classifier. Considering the sense-tagging task as a classification problem, this model assumes each word in a sentence is generated independently of the others, given the word sense.",
"cite_spans": [
{
"start": 226,
"end": 239,
"text": "(Lewis, 1998;",
"ref_id": "BIBREF8"
},
{
"start": 240,
"end": 265,
"text": "McCallum and Nigam, 1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
{
"text": "The adaptive process began with an initial collection of labelled sentences L and a set of unlabeled sentences U. For all instances of a polysemous word w in U and each sense s' of w, the sense-conditional probability score(s' | w, S) was computed. Next, the ratio R(w, S) was computed for all instances of w in U, and the most secure instance S was picked from U. That instance was then labelled with sense s_1 and added to L. The same process was repeated on the sentences in U until all had been disambiguated. Algorithm 2, which gives a formal and detailed description of adaptive WSD, follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
{
"text": "Algorithm 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
{
"text": "Step 1: If the unlabeled test set U is empty, stop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
{
"text": "Step 2: For all instances of a polysemous word w in U and each sense s' of w, compute the sense-conditional probability score(s' | w, S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
{
"text": "\\mathrm{score}(s' \\mid w, S) = P(s') \\prod_{k=1}^{|S|} \\Pr(c_k, w, s') \\qquad \\text{Eq. 2}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
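Eq. 2 amounts to a standard Na&#239;ve Bayes score: a sense prior P(s') multiplied by the co-occurrence probabilities of the context words. A minimal sketch follows, assuming the prior and the Pr(c, w, s') estimates come from the training step of Section 3.1; all names and the toy numbers are illustrative, not taken from the paper.

```python
def nb_score(sense, w, context, prior, cooc_prob):
    """Naive Bayes score of Eq. 2: P(s') * prod_k Pr(c_k, w, s').

    prior: callable mapping a sense to P(s');
    cooc_prob: callable mapping (c, w, s) to Pr(c, w, s).
    """
    p = prior(sense)
    for c in context:
        # Each context word contributes an independent factor,
        # per the Naive Bayes independence assumption.
        p *= cooc_prob(c, w, sense)
    return p

# Toy usage with made-up probabilities.
prior = {"SHORE": 0.5, "FINANCE": 0.5}.get
cooc = {("river", "bank", "SHORE"): 0.75,
        ("river", "bank", "FINANCE"): 0.25}
s_shore = nb_score("SHORE", "bank", ["river"], prior,
                   lambda c, w, s: cooc[(c, w, s)])
```

Here the SHORE score (0.5 &#215; 0.75) dominates the FINANCE score (0.5 &#215; 0.25), so SHORE would be chosen.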
{
"text": "Step 3: Compute R(w, S) for all instances of a polysemous word w in U. Step 4: Pick the most secure instance S, the one with the largest value of R(w, S), provided R(w, S) is greater than a preset threshold \u03b8. Then, label S with sense s_1 and add it to L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
{
"text": "Step 5: Go to Step 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive Sense Disambiguation",
"sec_num": "3.2"
},
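The loop of Algorithm 2 can be sketched as below. The paper does not give a formula for R(w, S); this sketch assumes it is the ratio of the best to the second-best sense score, which matches its use as a confidence measure, so treat that detail, and every name here, as an assumption.

```python
def adaptive_disambiguate(U, senses, score_fn, theta=2.0):
    """Sketch of Algorithm 2.

    U: list of (w, context) instances awaiting a sense label;
    senses: mapping from w to its candidate senses;
    score_fn: (sense, w, context) -> Naive Bayes score (Eq. 2).
    """
    L = []                                   # labelled instances
    U = list(U)
    while U:                                 # Step 1: stop when U is empty
        best = None
        for i, (w, ctx) in enumerate(U):     # Steps 2-3: score every instance
            ranked = sorted(senses[w], key=lambda s: score_fn(s, w, ctx),
                            reverse=True)
            top = score_fn(ranked[0], w, ctx)
            second = score_fn(ranked[1], w, ctx) if len(ranked) > 1 else 0.0
            # Assumed form of R(w, S): best-to-second-best score ratio.
            R = top / max(second, 1e-12)
            if best is None or R > best[0]:
                best = (R, i, ranked[0])
        R, i, s1 = best
        if R <= theta:                       # Step 4: require R > theta
            break                            # no remaining instance is secure
        w, ctx = U.pop(i)
        L.append((w, ctx, s1))               # label s1 and move it to L
    return L, U
```

Each pass labels the single most confident instance, so the knowledge gained from easy contexts gradually pulls harder ones over the threshold.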
{
"text": "Two experiments were conducted to assess the effectiveness of the proposed method: a WSD experiment with the adaptive process and one without it. The experimental setup is described in a number of steps as follows. (1) A set of 20 polysemous words was chosen as the target for disambiguation and evaluation. Table 1 lists these words. The number of senses of these words ranges from 2 to 8, with an average of 3.2. (2) For each polysemous word, a sense division was established based on the Chinese WordNet (Miller, 1990; Fellbaum, 1998; CKIP, 2003). (3) Tests were performed on sentences from the Sinica corpus (CKIP, 1995). The ambiguity of these testing words in our experiment is shown in Table 1.",
"cite_spans": [
{
"start": 533,
"end": 547,
"text": "(Miller, 1990;",
"ref_id": "BIBREF10"
},
{
"start": 548,
"end": 563,
"text": "Fellbaum, 1998;",
"ref_id": "BIBREF2"
},
{
"start": 564,
"end": 575,
"text": "CKIP, 2003)",
"ref_id": "BIBREF1"
},
{
"start": 643,
"end": 655,
"text": "(CKIP, 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.1"
},
{
"text": "To assess the performance, two human judges were asked to give a sense label to each example of these twenty words in the testing set. The results of running the two programs on the testing set were compared against those of the human assessors. The number of test instances and correct assignments in the two experiments were tallied to produce the precision rate for each experiment. Tables 2 and 3 summarize the experimental results. Based on the results, the adaptive approach was reasonably helpful for WSD, achieving an applicability improvement from 33.0% to 74.9% with comparable precision. Table 4 reports the precision and applicability for each run.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 407,
"text": "Tables 2 and Table 3are",
"ref_id": "TABREF1"
},
{
"start": 610,
"end": 617,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
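The two measures reported in the tables follow directly from the tallies: precision is correct over tagged, applicability is tagged over all test instances. A small Python check, using the totals reported in Tables 2 and 3 (3315 test instances in both runs):

```python
def precision(correct, tagged):
    """Percentage of tagged instances whose label matches the human judges'."""
    return 100.0 * correct / tagged

def applicability(tagged, instances):
    """Percentage of all test instances that the system tagged at all."""
    return 100.0 * tagged / instances

# Totals from Table 2 (without adaptation) and Table 3 (with adaptation).
without_adapt = (precision(1044, 1093), applicability(1093, 3315))
with_adapt = (precision(2301, 2484), applicability(2484, 3315))
```

These reproduce the reported figures: 95.5% precision at 33.0% applicability without adaptation, and 92.6% precision at 74.9% applicability with it.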
{
"text": "This study presented an adaptive approach to word sense disambiguation. Under this novel learning strategy, an initial knowledge base for WSD was first built based on the sense definitions in the training data. The partially disambiguated texts were then used to adjust this initial knowledge in an adaptive fashion so as to improve disambiguation precision. We have demonstrated that this approach can outperform established static approaches based on direct comparison of experimental results. This level of performance is achieved without lengthy training or the use of a very large training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The authors would like to thank the National Science Council of the Republic of China for partially supporting this research under Contract No. NSC 92-2411-H-031-014-ME.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Content and Illustration of Sinica Corpus of Academia Sinica",
"authors": [
{
"first": "",
"middle": [],
"last": "Ckip",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CKIP. 1995. The Content and Illustration of Sinica Corpus of Academia Sinica, Technical Report No. 95-02.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The sense and semantic of Chinese Word",
"authors": [
{
"first": "",
"middle": [],
"last": "Ckip",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "3--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CKIP. 2003. The sense and semantic of Chinese Word, Technical Report No. 03-01, 03-02.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum C. 1998. WordNet: An Electronic Lexical Database, The MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using Bilingual Materials to Develop Word Sense Disambiguation Methods",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 4th International Conference on Theoretical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "101--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W. A., K. W. Church, and D. Yarowsky. 1992b. Using Bilingual Materials to Develop Word Sense Disambiguation Methods. In Proceedings of the 4th International Conference on Theoretical and Methodological Issues in Machine Translation, 101-112.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "One Sense Per Discourse",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "233--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W. A., K. W. Church and D. Yarowsky. 1992a. One Sense Per Discourse. In Proceedings of the Speech and Natural Language Workshop, 233-237.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Subject-dependent Co-occurrence and Word Sense Disambiguation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Aidinejad",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "146--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guthrie, J., L. Guthrie, Y. Wilks and H. Aidinejad. 1991. Subject-dependent Co-occurrence and Word Sense Disambiguation. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, 146-152.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Noun Homonym Disambiguation using Local Context in Large Text Corpora",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the 7th International Conference on of UW Centre for the New OED and Text Research: Using Corpora",
"volume": "",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. 1991. Noun Homonym Disambiguation using Local Context in Large Text Corpora. In Proceedings of the 7th International Conference on of UW Centre for the New OED and Text Research: Using Corpora, pages 1-22.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic Sense Disambiguation using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone",
"authors": [
{
"first": "E",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Lesk",
"suffix": ""
}
],
"year": 1975,
"venue": "Proceedings of the ACM SIGDOC Conference",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelly, E. and P. Stone. 1975. Computer Recognition of English Word Senses, North-Holland, Amsterdam. Lesk., M. E. 1986. Automatic Sense Disambiguation using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone. In Proceedings of the ACM SIGDOC Conference, 24-26, Toronto, Ontario.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Na\u00efve Bayes at forty: The independence assumption in information retrieval",
"authors": [
{
"first": "D",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ECML-98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, D. D. 1998. Na\u00efve Bayes at forty: The independence assumption in information retrieval. In Proceedings of ECML-98.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A comparison of event models for Na\u00efve Bayes Text Classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
}
],
"year": 1998,
"venue": "AAAI-98 Workshop on Learning for Text Classification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum, A. and K. Nigam. 1998. A comparison of event models for Na\u00efve Bayes Text Classification. In AAAI-98 Workshop on Learning for Text Classification.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "WordNet: An Online Lexical Database",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A. 1990. WordNet: An Online Lexical Database. In Special Issue of International Journal of Lexicography, 3(4).",
"links": null
},
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word Sense Disambiguation with Very Large Neural Networks Extracted from Machine Readable Dictionaries",
"authors": [
{
"first": "J",
"middle": [],
"last": "Veronis",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "389--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veronis J. and N. Ide. 1990. Word Sense Disambiguation with Very Large Neural Networks Extracted from Machine Readable Dictionaries. In Proceedings of the 13th International Conference on Computational Linguistics, 389-394.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Providing Tractable Dictionary Tools. Machine Translation",
"authors": [
{
"first": "Y",
"middle": [
"A"
],
"last": "Wilks",
"suffix": ""
},
{
"first": "D",
"middle": [
"C"
],
"last": "Fass",
"suffix": ""
},
{
"first": "C",
"middle": [
"M"
],
"last": "Guo",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Plate",
"suffix": ""
},
{
"first": "B",
"middle": [
"M"
],
"last": "Slator",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "5",
"issue": "",
"pages": "99--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilks Y. A., D. C. Fass, C. M. Guo, J. E. McDonald, T. Plate and B. M. Slator. 1990. Providing Tractable Dictionary Tools. Machine Translation, 5, 99-154.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky, D. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 189-196.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Word</td><td>Pos</td><td colspan=\"2\"># of senses Word</td><td>Pos</td><td colspan=\"2\"># of senses Word</td><td>Pos</td><td># of senses</td></tr><tr><td/><td>Na</td><td>2</td><td/><td>Ncd</td><td>2</td><td/><td>Na</td><td>4</td></tr><tr><td/><td>Na</td><td>2</td><td/><td>Nf</td><td>2</td><td/><td>Nes</td><td>4</td></tr><tr><td/><td>Na</td><td>2</td><td/><td>Nf</td><td>2</td><td/><td>Na</td><td>5</td></tr><tr><td>\uf9b3</td><td>Na</td><td>2</td><td/><td>Na</td><td>3</td><td/><td>Nf</td><td>5</td></tr><tr><td/><td>Na</td><td>2</td><td/><td>Na</td><td>3</td><td/><td>Na</td><td>6</td></tr><tr><td/><td>Na</td><td>2</td><td/><td>Na</td><td>3</td><td/><td>Nf</td><td>8</td></tr><tr><td/><td>Ncd</td><td>2</td><td/><td>Na</td><td>3</td><td/></tr><tr><td colspan=\"6\">Table 2 Experimental results without adaptive approach.</td><td/></tr><tr><td colspan=\"3\">Word Pos # of instance</td><td>#of tagged</td><td/><td>correct</td><td colspan=\"2\">Precision (%) Applicability (%)</td></tr><tr><td/><td>Na</td><td>244</td><td colspan=\"2\">102</td><td>94</td><td>92.1</td><td>41.8</td></tr><tr><td/><td>Na</td><td>10</td><td/><td>5</td><td>5</td><td>100</td><td>50</td></tr><tr><td/><td>Na</td><td>16</td><td/><td>8</td><td>7</td><td>87.5</td><td>50</td></tr><tr><td>\uf9b3</td><td>Na</td><td>53</td><td colspan=\"2\">21</td><td>21</td><td>100</td><td>39.6</td></tr><tr><td/><td>Na</td><td>258</td><td colspan=\"2\">152</td><td>151</td><td>99.3</td><td>58.9</td></tr><tr><td/><td>Na</td><td>5</td><td/><td>1</td><td>1</td><td>100</td><td>20</td></tr><tr><td/><td>Ncd</td><td>936</td><td colspan=\"2\">287</td><td>265</td><td>92.3</td><td>30.6</td></tr><tr><td/><td>Ncd</td><td>182</td><td colspan=\"2\">64</td><td>63</td><td>98.4</td><td>35.1</td></tr><tr><td/><td>Nf</td><td>319</td><td colspan=\"2\">124</td><td>123</td><td>99.1</td><td>38.8</td></tr><tr><td/><td>Nf</td><td>25</td><td 
colspan=\"2\">16</td><td>16</td><td>100</td><td>64</td></tr><tr><td/><td>Na</td><td>26</td><td/><td>8</td><td>6</td><td>75</td><td>30.7</td></tr><tr><td/><td>Na</td><td>108</td><td colspan=\"2\">39</td><td>39</td><td>100</td><td>36.1</td></tr><tr><td/><td>Na</td><td>11</td><td/><td>2</td><td>2</td><td>100</td><td>18.1</td></tr><tr><td/><td>Na</td><td>14</td><td/><td>5</td><td>5</td><td>100</td><td>35.7</td></tr><tr><td/><td>Na</td><td>17</td><td colspan=\"2\">13</td><td>13</td><td>100</td><td>76.4</td></tr><tr><td/><td>Nes</td><td>62</td><td/><td>3</td><td>3</td><td>100</td><td>4.8</td></tr><tr><td/><td>Na</td><td>239</td><td colspan=\"2\">35</td><td>34</td><td>97.1</td><td>14.6</td></tr><tr><td/><td>Nf</td><td>106</td><td colspan=\"2\">44</td><td>42</td><td>95.4</td><td>41.5</td></tr><tr><td/><td>Na</td><td>303</td><td colspan=\"2\">96</td><td>88</td><td>91.6</td><td>31.6</td></tr><tr><td/><td>Nf</td><td>381</td><td colspan=\"2\">68</td><td>66</td><td>97</td><td>17.8</td></tr><tr><td>Total</td><td/><td>3315</td><td colspan=\"2\">1093</td><td>1044</td><td>95.5</td><td>33.0</td></tr></table>",
"num": null,
"text": "Ambiguities of experimental testing data sets.",
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">Word Pos</td><td colspan=\"3\"># of instance #of tagged</td><td colspan=\"3\">correct Precision (%) Applicability (%)</td></tr><tr><td/><td>Na</td><td/><td>244</td><td>232</td><td>216</td><td>93.1</td><td>95.1</td></tr><tr><td/><td>Na</td><td/><td>10</td><td>7</td><td>7</td><td>100.0</td><td>70.0</td></tr><tr><td/><td>Na</td><td/><td>16</td><td>8</td><td>7</td><td>87.5</td><td>50.0</td></tr><tr><td>\uf9b3</td><td>Na</td><td/><td>53</td><td>30</td><td>30</td><td>100.0</td><td>56.6</td></tr><tr><td/><td>Na</td><td/><td>258</td><td>228</td><td>225</td><td>98.7</td><td>88.4</td></tr><tr><td/><td>Na</td><td/><td>5</td><td>2</td><td>2</td><td>100.0</td><td>40.0</td></tr><tr><td/><td>Ncd</td><td/><td>936</td><td>916</td><td>849</td><td>92.7</td><td>97.9</td></tr><tr><td/><td>Ncd</td><td/><td>182</td><td>162</td><td>160</td><td>98.8</td><td>89.0</td></tr><tr><td/><td>Nf</td><td/><td>319</td><td>144</td><td>128</td><td>88.9</td><td>45.1</td></tr><tr><td/><td>Nf</td><td/><td>25</td><td>16</td><td>16</td><td>100.0</td><td>64.0</td></tr><tr><td/><td>Na</td><td/><td>26</td><td>10</td><td>6</td><td>60.0</td><td>38.5</td></tr><tr><td/><td>Na</td><td/><td>108</td><td>55</td><td>53</td><td>96.4</td><td>50.9</td></tr><tr><td/><td>Na</td><td/><td>11</td><td>2</td><td>2</td><td>100.0</td><td>18.2</td></tr><tr><td/><td>Na</td><td/><td>14</td><td>5</td><td>5</td><td>100.0</td><td>35.7</td></tr><tr><td/><td>Na</td><td/><td>17</td><td>16</td><td>15</td><td>93.8</td><td>94.1</td></tr><tr><td/><td>Nes</td><td/><td>62</td><td>5</td><td>4</td><td>80.0</td><td>8.1</td></tr><tr><td/><td>Na</td><td/><td>239</td><td>117</td><td>99</td><td>84.6</td><td>49.0</td></tr><tr><td/><td>Nf</td><td/><td>106</td><td>89</td><td>84</td><td>94.4</td><td>84.0</td></tr><tr><td/><td>Na</td><td/><td>303</td><td>291</td><td>250</td><td>85.9</td><td>96.0</td></tr><tr><td/><td>Nf</td><td/><td>381</td><td>149</td><td>143</td><td>96.0</td><td>39.1</td></tr><tr><td>Total</td><td/>
<td colspan=\"2\">3315</td><td>2484</td><td>2301</td><td>92.6</td><td>74.9</td></tr><tr><td colspan=\"5\">Table 4 Experimental results for each runs.</td><td/><td/></tr><tr><td colspan=\"3\">Runs tagged correct</td><td colspan=\"2\">Precision (%)</td><td colspan=\"2\">Applicability (%)</td></tr><tr><td>1</td><td>1093</td><td>1044</td><td/><td>95.5</td><td/><td>33.0</td></tr><tr><td>2</td><td>2185</td><td>2043</td><td/><td>93.5</td><td/><td>65.9</td></tr><tr><td>3</td><td>2440</td><td>2265</td><td/><td>92.8</td><td/><td>73.6</td></tr><tr><td>4</td><td>2484</td><td>2301</td><td/><td>92.6</td><td/><td>74.9</td></tr></table>",
"num": null,
"text": "Experimental results with adaptive approach.",
"type_str": "table",
"html": null
}
}
}
}