{
"paper_id": "S01-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:27.179626Z"
},
"title": "The ,Johns Hopkins SENSEVAL2 System Descriptions",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"settlement": "Maryland",
"country": "USA"
}
},
"email": "yarowsky@cs.jhu.edu"
},
{
"first": "Silviu",
"middle": [],
"last": "Cucerzan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"settlement": "Maryland",
"country": "USA"
}
},
"email": "silviu@cs.jhu.edu"
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"settlement": "Maryland",
"country": "USA"
}
},
"email": "rflorian@cs.jhu.edu"
},
{
"first": "Charles",
"middle": [],
"last": "Schafer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"settlement": "Maryland",
"country": "USA"
}
},
"email": "cschafer@cs.jhu.edu"
},
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"settlement": "Maryland",
"country": "USA"
}
},
"email": "richardw@cs.jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article describes the Johns Hopkins University (JHU) sense disambiguation systems that participated in seven SENSEVAL2 tasks: four supervised lexical choice systems (Basque, English, Spanish, Swedish), one unsupervised lexical choice system (Italian) and two supervised all-words systems (Czech, Estonian). The common core supervised system utilizes voting-based classifier combination over several diverse systems, including decision lists (Yarowsky, 2000), a cosine-based vector model and two Bayesian classifiers. The classifiers employed a rich set of features, including words, lemmas and part-of-speech informatino modeled in several syntactic relationships (e.g. verb-object), bag-of-words context and local collocational n-grams. The allwords systems relied heavily on morphological analysis in the two highly inflected languages. The unsupervised Italian system was a hierarchical class model using the Italian WordNet.",
"pdf_parse": {
"paper_id": "S01-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "This article describes the Johns Hopkins University (JHU) sense disambiguation systems that participated in seven SENSEVAL2 tasks: four supervised lexical choice systems (Basque, English, Spanish, Swedish), one unsupervised lexical choice system (Italian) and two supervised all-words systems (Czech, Estonian). The common core supervised system utilizes voting-based classifier combination over several diverse systems, including decision lists (Yarowsky, 2000), a cosine-based vector model and two Bayesian classifiers. The classifiers employed a rich set of features, including words, lemmas and part-of-speech informatino modeled in several syntactic relationships (e.g. verb-object), bag-of-words context and local collocational n-grams. The allwords systems relied heavily on morphological analysis in the two highly inflected languages. The unsupervised Italian system was a hierarchical class model using the Italian WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The JHU SENSEVAL2 systems utilized a rich feature space based on raw words, lemmas and partof-speech (POS) tags in a variety of positional relationships to the target word. These positions include traditional bag-of-word context, local bigram and trigram collocations and several syntactic relationships based on predicate-argument structure (described in Section 1.2). Their use is illustrated on a sample English sentence for train in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 437,
"end": 445,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Feature Space",
"sec_num": "1"
},
{
"text": "Lemmatization Part-of-speech tagger availability varied across the languages included in this sense-disambiguation system evaluation. Transformation-based taggers (Ngai and Florian, 2001) were trained on standard data for English (Penn Treebank), Swedish (SUC-1 corpus) and Estonian (MultextEast corpus). For Czech, an available POS tagger (Hajic and Hladka, 1998) , which includes lemmatization, was used. The remaining languages -Spanish, Italian and Basquewere tagged using an unsupervised tagger ( Yarowsky, 2000) . Lemmatization was performed using a combination of supervised and unsupervised methods (Yarowsky and Wicentowski, 2000) , and using existing trie-based supervised models for English.",
"cite_spans": [
{
"start": 163,
"end": 187,
"text": "(Ngai and Florian, 2001)",
"ref_id": "BIBREF2"
},
{
"start": 340,
"end": 364,
"text": "(Hajic and Hladka, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 500,
"end": 501,
"text": "(",
"ref_id": null
},
{
"start": 502,
"end": 517,
"text": "Yarowsky, 2000)",
"ref_id": "BIBREF5"
},
{
"start": 607,
"end": 639,
"text": "(Yarowsky and Wicentowski, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging and",
"sec_num": "1.1"
},
{
"text": "Extracted syntactic relationships in the feature space depended on the keyword's part of speech:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Features",
"sec_num": "1.2"
},
{
"text": "\u2022 for verb keywords -the head noun of the verb's object, particle/preposition and objectof-preposition were extracted when available. \u2022 for noun keywords -the headword of any verbobject, subject-verb or noun-noun relationships identified for the keyword. \u2022 for adjective keywords -the head noun modified by the adjective (if identifiable).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Features",
"sec_num": "1.2"
},
{
"text": "These syntactic features were extracted using simple heuristic patterns and regular expressions over the parts-of-speech surrounding the keyword.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Features",
"sec_num": "1.2"
},
{
"text": "The supervised JHU systems utilize classifier combination merging the results of five diverse learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Lexical Choice Systems",
"sec_num": "2"
},
{
"text": "The lexical choice task can be cast as a classification task: training data is given in the form of a set ing data. The training data T is used to estimate class probabilities and then the sense classification is made by choosing the class with the maximum a posteriori class probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Algorithm Design",
"sec_num": "2.1"
},
{
"text": "S = argmaxP (s'ID) = argmaxP (S') \u2022 P (DIS') S' S'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Algorithm Design",
"sec_num": "2.1"
},
{
"text": "The disambiguation models used in our experiments are feature-based models. A feature is a boolean function defined as f w :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Algorithm Design",
"sec_num": "2.1"
},
{
"text": "F x 1J -+ { 0, 1},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Algorithm Design",
"sec_num": "2.1"
},
{
"text": "where F is the entire set of features and 1J is the document space. An overview of the exploited feature space was given in Section 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Algorithm Design",
"sec_num": "2.1"
},
{
"text": "Our Bayesian and cosine-based models use a common vector representation, capturing both traditional bag-of-words features and the extended Ngram and predicate-argument features in a single data structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "In these models, a vector is created for each document in the collection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "D; = (D;J)j=l,IFI",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "where F is the entire utilized feature space Cij Dij = NWJ where c;j is the the number i of times the feature fJ appears in document D;, Ni is the number of words in the document D; and Wj is the weight associated with the feature fi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "To avoid confusion between the same word in multiple feature roles, feature values are marked with their positional type (e.g. children_ object, toilet_ L, and their R as distinct from children, toilet and their in u;marked bag-of-words context).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "The basic sense disambiguation algorithm proceeds as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "1. Vectors in the training data are assigned to classes based on their classification; 2. For each vector in the test data, the a posteriori class distribution is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "P (SID)= Sim (D, Cs) 2:: Sim (D, Cs') S' 164",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "where Cs is the centroid corresponding to the sense S and Sim is the similarity measure used by the algorithm (cosine or Bayes). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "P (D;ICs) = II P (!JIGs) /jED;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "The probability distribution P (!jiGs) is obtained by smoothing the word relative frequencies in the cluster C s. Given the lack of independence between the word-based and lemma-based feature spaces, these are utilized in two separate Bayesian models with output combined in Section 2.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector-based Algorithms",
"sec_num": "2.2"
},
{
"text": "The decision list model we used in our system is a non-hierarchical variant of the method of interpolated decision lists described in Yarowsky (2000) . For each feature fi a smoothed log of likelihood ratio (log P(fdSi) ) is computed for each sense Sj, with",
"cite_spans": [
{
"start": 134,
"end": 149,
"text": "Yarowsky (2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Lists",
"sec_num": "2.3"
},
{
"text": "P(f;i~Si)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Lists",
"sec_num": "2.3"
},
{
"text": ". . . smoothing based on an empmcally estimated function of feature type and relative frequency. Candidate features are ordered by this smoothed ratio (putting the best evidence first), and the remaining probabilities are computed via the interpolation of the global and history-conditional probabilities. By utilizing the single strongest-matching evidence in context, non-independent feature spaces combine readily without inflated confidence, and can be mapped to accurate and robust probability estimates as shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 521,
"end": 529,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Decision Lists",
"sec_num": "2.3"
},
{
"text": "The English task differs slightly from the other lexical-choice tasks in that phrasal verbs are expljcitly marked in the training and test data. To make reasonable use of this information, when a phrasal verb is marked, only corresponding phrasal senses are considered; conversely when a phrasal Likewise, when a training or test sentence matches a compound noun in the observed sense inventory (e.g. art_gallery%1:06:00::) only the matching phrasal sense(s) are considered unless there is at least one non-phrasal sense tagged in the training data for that compound (indicating the potential for both compositional and non-compositional interpretations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Details",
"sec_num": "2.4"
},
{
"text": "Several classifier combination approaches were investigated in the system development phase. They are outlined below, along with their cross-validated performance on the English lexical-sample training data (in Table 1 ). In each case four individual classifiers were combined: the cosine model, two Bayes models (one based on words and one based on lemmas 1 ), and the decision-list model. The first two model combination approches simply averages the output of the participating classifiers over each candidate sense tag, in terms of P(SjiDi) and rank(SjiDi) respectively, with each classifier given an equal vote 2 \u2022",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classifier Combination",
"sec_num": "2.5"
},
{
"text": "The remaining methods assign potentially variable weights to the votes of different classifiers. Interestingly, Equal Weighting of all four classifiers slightly outperforms classifier weighting proportional to each model's aggregate accuracy (Performance-Weighted voting), similar to the technique used for classifier combination in part-ofspeech tagging in van Halteren et al. (1998). Finally, it was observed that on sentences where decision lists have high model confidence their accuracy exceeds other classifiers. Thus the most effective approach, based on training-data cross validation, was found to be a very basic Thresholded Model Voting:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Combination",
"sec_num": "2.5"
},
{
"text": "\u2022 If the decision_list_confidence;::: 0.985 (an empirically selected threshold) then return the output of the decision list; \u2022 Otherwise, each system votes for the sense that is most likely under it and, another vote is obtained from the most probable class yielded by linear interpolation of the 4 classifiers. This simple top-performing approach was utilized in the evaluation system, and is reasonably close to the performance of an Oracle upper bound for classifier combination (using the output of the single best classifier on each test instance -unknowable in practice). 3 Supervised All-Words Systems 3.1 Estonian All-words Task Because of the importance of morphological analysis in a highly inflected language such as Estonian, a lemmatizer based on Yarowsky and Wicentowski (2000) was first applied to all words in the training data (and, at evaluation time, the test data). For each lemma, the P (sensejlemma) distribution was measured on the training data. For all lemmas exhibiting only one sense in the training data, this sense was returned. Likewise, if there was insufficient data for word-specific training (the sum of the minority sense examples for the word in training data was below a threshold) the majority sense in training was returned for all instances of that lemma. In the remaining cases where a lemma had more than one sense in training, with sufficient minority examples to adequately be modeled, the generic JHU lexical sample sense classifier was trained and applied.",
"cite_spans": [
{
"start": 760,
"end": 791,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Combination",
"sec_num": "2.5"
},
{
"text": "Czech is another example of a highly inflected language. A part-of-speech tagger and lemmatizer kindly provided by Jan Hajic of Charles University (Hajic and Hladk:a, 1998) was first applied to the data. Consistent with the spirit of evaluating sense disambiguation rather than morphology, the JHU system focused on those words where more than one sense was possible for a root word (e.g.",
"cite_spans": [
{
"start": 147,
"end": 172,
"text": "(Hajic and Hladk:a, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Czech All-words Task",
"sec_num": "3.2"
},
{
"text": "the -1 and -2 suffixes in the Czech inventory). In these cases, the fine-grained output of the Czech lemmatizer was ignored (in both training and test) and a generic lexical-sample sense classifier was applied to the sense-distinction tags extracted from the lemmatized training data (see Section 2), using the classification models employed in Estonian. Whenever insufficient numbers of minority tagged examples were available for training a word-specific classifier, the majority sense for the POS-level lemma was returned. Likewise, if only one possible sense tag was observed for any POS-levellemma analysis, then this unambiguous sense tag was returned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Czech All-words Task",
"sec_num": "3.2"
},
{
"text": "The Italian task stands out from the group of lexical choice tasks because no labelled training was data provided for Italian; instead a subset of the Italian Wordnet was provided. To obtain a sense classifier for Italian, we employed an unsupervised method that used hierarchical class models of the Wordnet relationships among words (synonymy, hypernomy, etc) and a large unannotated corpus of Italian newspaper data to obtain sense centroids. First, every relationship type in the Italian Wordnet received an initial weight, based on a roughly estimated measure of the relative dissimilarity of two words in that relationship. For instance, the synonymy relationship received a small weight (words are semantically \"close\"), while other relationships (has_ near_ synonym, causes, has_ hypemym) received proportionately larger weights (words are more semantically distant). Starting from the senses Sofa target k, the wordnet relationships graph was explored, up to a given distance (two links away), creating \"clouds\" of similar words, Ms, together with a similarity 3 to the original sense, S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Italian System",
"sec_num": "4"
},
{
"text": "For each of the words win Ms, we extracted sentences from the unannotated corpus that contained the word w, and then considered them as being examples of context for the sense S of target k, and assigned them to the centroid C s (the centroid of the sense S) with a weight corresponding to the similarity between the word w and the sense S (computed using the wordnet graph). After all the documents were distributed, the test documents were also assigned to the most probable cluster, similar to the other lexical choice tasks. The centroids were then allowed to adjust in a manner similar to k-means clustering. At each step, the centroids were recomputed, after which each document migrated to the closest cluster (i.e. argmaxs P (CsiD)), and the process was repeated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Italian System",
"sec_num": "4"
},
{
"text": "After the process converged, each test document was 3 The weight on a path was computed as the sum of the weights on the path, and the similarity was computed as Sim ( w, S) = e-c(w,S) -large weights result in 0 similarity. assigned the label corresponding to the sense centroid it converged into. This process is completely unsupervised, and the only structured resource that was used is the provided Italian Wordnet subset. Table 2 lists the official performance of the JHU systems on unseen test data in the final SENSEVAL2 evaluation. Coarse-grained performance scores are based on a hierarchical sense clustering given by the task organizers in 4 of the languages. In the lexical sample tasks, these scores were obtained after correction of a simple bug in the merger of final system output as provided for in the SENSEVAL evaluation protocols.",
"cite_spans": [
{
"start": 52,
"end": 53,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 426,
"end": 433,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Unsupervised Italian System",
"sec_num": "4"
},
{
"text": "As illustrated in the comparative performance tables elsewhere in this volume, the JHU systems are consistently very successful across all 7 languages and 3 major system types described here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "0n training-set cross-validation it was observed that the two systems were uncorrelated enough to make it useful to keep both of them.2 Decision lists are not included because they only assign a probability to their selected classifier output but not to lowerranked candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language independent minimally supervised induction of lexical probabilities",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cucerzan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACL-2000",
"volume": "",
"issue": "",
"pages": "270--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Cucerzan and D. Yarowsky. 2000. Language inde- pendent minimally supervised induction of lexical probabilities. In Proceedings of ACL-2000, pages 270-277, Hong Kong.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tagging inflective languages: Prediction of morphological categories for a rich, structured tagset",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hladka",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of CO LING/ A CL-98",
"volume": "",
"issue": "",
"pages": "483--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hajic and Hladka. 1998. Tagging inflective lan- guages: Prediction of morphological categories for a rich, structured tagset. In Proceedings of CO LING/ A CL-98, pages 483-490, Montreal.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Transformationbased learning in the fast lane",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL-2001",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Ngai and R. Florian. 2001. Transformation- based learning in the fast lane. In Proceedings of NAACL-2001, pages 40-47, Pittsburgh.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improving Data Driven Wordclass Tagging by System Combination",
"authors": [
{
"first": "H",
"middle": [],
"last": "Van Halteren",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING/ACL-1998",
"volume": "",
"issue": "",
"pages": "491--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. van Halteren, J. Zavrel and W. Daelemans. 1998. Improving Data Driven Wordclass Tag- ging by System Combination In Proceedings of COLING/ACL-1998, pages 491-497, Montreal.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Minimally supervised morphological analysis by multimodal alignment",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wicehtowski",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACL-2000",
"volume": "",
"issue": "",
"pages": "207--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky and R. Wicehtowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proceedings of ACL-2000, pages 207-216, Hong Kong.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical decision lists for word sense disambiguation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "2",
"pages": "179--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky. 2000. Hierarchical decision lists for word sense disambiguation. Computers and the Humanities, 34(2):179-186.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "of word-document pairs T = [(w;, D;j), S;jJi,j (Sij being the sense associated with the document D;j of keyword wi), labeled with the corresponding gold standard class. The goal is to establish the classification of a set of unlabeled word-document pairs T' = { (wi, D~J\u2022)} .. , not previously seen in the train-\u2022J .",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Mapping between raw confidence scores and classification accuracy for English decision lists verb is not marked, no phrasal senses are considered.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Many mothers do not even try to toilet trmn their children until the age of 2 years or later ... \"",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">.t&lt;eature type Context . Context Word try</td><td>POS VB</td><td>Lemma tryjv</td></tr><tr><td>Context</td><td>to</td><td>TO</td><td>to/T</td></tr><tr><td>Context</td><td>toilet</td><td>NN</td><td>toilet/N</td></tr><tr><td>Context</td><td>train</td><td>VBP</td><td>train/v</td></tr><tr><td>Context Context</td><td>their . . .</td><td>DT . . .</td><td>their/D ...</td></tr><tr><td colspan=\"4\">Syntactic {predicate-argument) features</td></tr><tr><td>Object</td><td>children</td><td>NNS</td><td>child/N</td></tr><tr><td>Prep</td><td>until</td><td>IN</td><td>until/I</td></tr><tr><td>ObjPrep</td><td>age</td><td>NN</td><td>age/N</td></tr><tr><td colspan=\"4\">Ngram collocational features</td></tr><tr><td>-1 bigram</td><td>toilet</td><td>NN</td><td>toilet/N</td></tr><tr><td>+1 bigram</td><td>their</td><td>DT</td><td>their/D</td></tr><tr><td>-2/-1 trigram</td><td colspan=\"3\">to toilet \u2022 TO-NN tojT toilet/N \u2022</td></tr><tr><td>-1/H trigram</td><td>to \u2022 their</td><td colspan=\"2\">TO-DT to/T \u2022 their/D</td></tr><tr><td colspan=\"4\">+1/+2 trigram their children DT-NN their/D child/N</td></tr><tr><td colspan=\"4\">Figure 1: Example sentence and extracted features</td></tr><tr><td>and</td><td/><td/><td/></tr></table>"
},
"TABREF1": {
"text": "3. The sample D is labeled with sense S if S = argmaxP (S'ID). The weight associated with a feature (Fj) is its inverse document frequency Wj =log!:;, where N is the total number of documents and Nj is the number of documents containing feature fJ.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>S'</td></tr><tr><td>2.2.1 The Cosine-based Model</td></tr><tr><td>In this model, traditional cosine similarity is used</td></tr><tr><td>to compute similarity between a document D and</td></tr><tr><td>a centroid C. Function words and POS tags were excluced from</td></tr><tr><td>the cosine vectors.</td></tr><tr><td>2.2.2 The Bayesian Models</td></tr><tr><td>In the Bayes model, the Bayes similarity is computed</td></tr><tr><td>as:</td></tr><tr><td>and the following assumption of independence is</td></tr><tr><td>made:</td></tr></table>"
},
"TABREF4": {
"text": "Official JHU system performance",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}