{
"paper_id": "C94-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:49:00.417253Z"
},
"title": "CO-OCCURRENCE VECTORS FROM CORPORA VS. DISTANCE VECTORS FROM DICTIONARIES",
"authors": [
{
"first": "Yoshiki",
"middle": [],
"last": "Niwa",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Nitta",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A comparison W~LS made of vectors derived by using ordinary co-occurrence statistics from large text corpora and of vectors derived by measuring the interword distances in dictionary definitions. The precision of word sense disambiguation by using co-occurrence vectors frorn the 1987 Wall Street Journal (20M total words) was higher than that by using distance vectors from the Collins English l)ictionary (60K head words + 1.6M definition words), llowever, other experimental results suggest that distance vectors contain some different semantic information from co-occurrence vectors.",
"pdf_parse": {
"paper_id": "C94-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "A comparison W~LS made of vectors derived by using ordinary co-occurrence statistics from large text corpora and of vectors derived by measuring the interword distances in dictionary definitions. The precision of word sense disambiguation by using co-occurrence vectors frorn the 1987 Wall Street Journal (20M total words) was higher than that by using distance vectors from the Collins English l)ictionary (60K head words + 1.6M definition words), llowever, other experimental results suggest that distance vectors contain some different semantic information from co-occurrence vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word vectors reflecting word meanings are expected to enable numerical approaches to semantics. Some early attempts at vector representation in I)sycholinguistics were the semantic d (O'erential approach (Osgood et al. 1957) and the associative distribution apl)roach (Deese 1962) . llowever, they were derived manually through psychological experiments. An early attempt at automation was made I)y Wilks el aL (t990) us-.",
"cite_spans": [
{
"start": 183,
"end": 224,
"text": "(O'erential approach (Osgood et al. 1957)",
"ref_id": null
},
{
"start": 268,
"end": 280,
"text": "(Deese 1962)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ing co-occurrence statistics. Since then, there haw\" been some promising results from using co-occurrence vectors, such as word sense disambiguation (Schiitze [993) , and word clustering (Pereira eL al. 1993 ).",
"cite_spans": [
{
"start": 149,
"end": 164,
"text": "(Schiitze [993)",
"ref_id": null
},
{
"start": 187,
"end": 207,
"text": "(Pereira eL al. 1993",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "llowever, using the co-occurrence statistics requires a huge corpus that covers even most rare words. We recently developed word vectors that are derived from an ordinary dictionary by measuring the interword distances in the word definitions (Niwa and Nitta 1993) . 'this method, by its nature, h~s no prol)lom handling rare words. In this paper we examine the nsefldness of these distance vectors as semantic re W resentations by comparing them with co-occur,'ence vectors.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "(Niwa and Nitta 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A reference network of the words in a dictionary (Fig. 1 ) is used to measure the distance between words, q'he network is a graph that shows which words are used in the. definition of each word (Nitta 1988) . The network shown in Fig. 1 is for a w~ry small portion of the reference network for the Collins English 1)ictionary (1979 edition) in the CI)-I{OM I (Liberman 1991), with 60K head words -b 1.6M definition words. For example, tile delinition for diclionarg is % book ill which the words of a language are listed alphabetically .... \" The word dicliona~d is thus linked to the words book, word, language, and alphabelical.",
"cite_spans": [
{
"start": 195,
"end": 207,
"text": "(Nitta 1988)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "(Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 231,
"end": 237,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Distance Vectors",
"sec_num": null
},
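To make the reference-network idea concrete, here is a minimal sketch (not the authors' code) that links each head word to the words of its definition and reads off a distance vector by breadth-first search, under the unit link length assumed below; the toy definitions are invented for illustration.

```python
# Minimal sketch of a dictionary reference network; toy definitions,
# not the Collins English Dictionary data used in the paper.
from collections import deque

definitions = {
    "dictionary": ["book", "word", "language", "alphabetical"],
    "book": ["word", "people"],
    "word": ["language", "unit"],
    "language": ["word", "people"],
    "alphabetical": ["language"],
}

# Undirected graph: each head word is linked to its definition words.
graph = {}
for head, body in definitions.items():
    for w in body:
        graph.setdefault(head, set()).add(w)
        graph.setdefault(w, set()).add(head)

def distance_vector(word, origins):
    """Shortest-path lengths (every link = 1) from `word` to each origin."""
    dist = {word: 0}
    queue = deque([word])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [dist.get(o, float("inf")) for o in origins]

# Origins O1, O2, O3 as in Fig. 1: unit, book, people.
print(distance_vector("dictionary", ["unit", "book", "people"]))  # [2, 1, 2]
```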
{
"text": "A word w~etor is defined its the list of distances from a word to a certain sew of selected words, which we call origins. The words in Fig. 1 marked with Oi (unit, book, and people) m'e assumed to be origin words. In principle, origin words can be freoly chosen. In our exl~eriments we used mi(Idle fi'equency words: the 51st to 1050th most frequent words in the reference Collins English I)ictiotmry (CI';D),",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 141,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Distance Vectors",
"sec_num": null
},
{
"text": "The distance w~ctor fl)r diclionary is deriwM it'* fob lOWS:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Vectors",
"sec_num": null
},
{
"text": "~) ... disti~uc,, ((ticl., 01) dictionary ~ 1 ... distance (dict., 0'2) 2 ... distance (dicL, Oa)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Vectors",
"sec_num": null
},
{
"text": "The i-4,h element is the distance (the length of the shortest path) between diclionary and the i-th origin, Oi. To begin, we assume every link has a constant length o[' 1. The actual definition for link length will be given later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Vectors",
"sec_num": null
},
{
"text": "If word A is used in the definition of word B, t.he,m words are expected to be strongly related. This is the basis of our hypothesis that the distances in the refi~rence network reflect the associative distances between words (Nitta 1933 Vdronis and Ide (1990) for word sense disainl)iguation) and as fields for artificial association, such its spreading activation (by Kojiina and l:urugori (1993) for context-coherence measurement). The distance vector of a word can be considered to be a list, of the activation strengths at the origin nodes when the word node is activated. Therefore, distance w~ctors can be expected to convey almost the santo information as the entire network, and clearly they are Ili~icli easier to handle.",
"cite_spans": [
{
"start": 226,
"end": 237,
"text": "(Nitta 1933",
"ref_id": null
},
{
"start": 238,
"end": 260,
"text": "Vdronis and Ide (1990)",
"ref_id": null
},
{
"start": 370,
"end": 398,
"text": "Kojiina and l:urugori (1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Vectors",
"sec_num": null
},
{
"text": "As a seinant{c representation of words, distltllCe w~ctors are expected to depend very weakly on the particular source dictionary. We eolilpared two sets of distance vectors, one from I,I)OCE (Procter 1978) and the other from COBUILD (Sinclair 1987) , and verified that their difference is at least snlaller than the difDrence of the word definitions themselves (Niwa and Nitta 1993 ).",
"cite_spans": [
{
"start": 192,
"end": 206,
"text": "(Procter 1978)",
"ref_id": "BIBREF12"
},
{
"start": 234,
"end": 249,
"text": "(Sinclair 1987)",
"ref_id": null
},
{
"start": 362,
"end": 382,
"text": "(Niwa and Nitta 1993",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependence on Dietiolnlrles",
"sec_num": null
},
{
"text": "We will now describe some technical details al)Ollt the derivation of distance vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependence on Dietiolnlrles",
"sec_num": null
},
{
"text": "Distance measurenient in a reference network depends on the detinition of link length. Previously, we assumed for siinplicity that every link has a construct length. Ilowever, this shnph; definition seerns tlnnatllral because it does not relh'.ct word frequency. Because tt path through low-fi'equency words (rare words) implies a strong relation, it should be ineasnred ms a shorter path. Therefore, we use the following definition of link length, which takes accotltlt of word frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lhlk Length",
"sec_num": null
},
{
"text": "length (Wi, W2) d,'I:---log (7N-77'~.,)n'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lhlk Length",
"sec_num": null
},
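As a hedged illustration, such a frequency-sensitive link length can be plugged into a standard Dijkstra search to measure distances over the weighted network. The formula here follows the reconstruction above, -log(n^2 / (N1*N2)), so treat its exact form as an assumption; the function names are ours.

```python
# Sketch: frequency-sensitive link lengths with Dijkstra shortest paths.
import heapq
import math

def link_length(n, n1, n2):
    # n: direct links between W1 and W2; n1, n2: total links from/to each.
    # Links through rare words (small n1, n2) come out shorter (stronger).
    # Reconstructed formula; the original is garbled, so this is assumed.
    return -math.log((n * n) / (n1 * n2))

def dijkstra(graph, source):
    """graph: {node: {neighbor: length}}; returns shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```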
{
"text": "This shows the length of the links between words Fig, 2 , where Ni denotes the total minibet of links front and to }Vi and n denotes the uulnlmr of direct links bt.'tween these two words. coordinal.e is changed hire its deviation in thc ~ vector:",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 55,
"text": "Fig, 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Lhlk Length",
"sec_num": null
},
{
"text": "Wi(i = 1,2) ill",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lhlk Length",
"sec_num": null
},
{
"text": "where t? and cd are tile average .~_llld i,he standard deviation of v} (i = I .... ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lhlk Length",
"sec_num": null
},
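A small sketch of this two-step normalization (first the deviation per coordinate across words, then the deviation within each vector); the function names are ours and the formulas follow the reconstruction above, so treat the details as an assumption.

```python
# Sketch of the two-step normalization of distance vectors.
import math

def mean_std(xs):
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return m, (s or 1.0)  # guard against zero deviation

def normalize(vectors):
    """vectors: equal-length distance vectors, one per word."""
    dim = len(vectors[0])
    # Step 1: deviation in each coordinate (across words, per origin).
    stats = [mean_std([v[i] for v in vectors]) for i in range(dim)]
    step1 = [[(v[i] - stats[i][0]) / stats[i][1] for i in range(dim)]
             for v in vectors]
    # Step 2: deviation within each vector.
    out = []
    for v in step1:
        m, s = mean_std(v)
        out.append([(x - m) / s for x in v])
    return out

print(normalize([[2.0, 1.0, 2.0], [1.0, 3.0, 2.0]]))
```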
{
"text": "We use ordinary co-o(:Clll'rl;llCe statistics ;tlld illellSllre the co-occurrei/ce likelihood betweeii two words, X and Y, hy the Inutua] hiforlnaLioii estilnate. ((]hurch and ll~uiks 1989)'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-occurro.ric(; Vectors",
"sec_num": null
},
{
"text": "l(X,V) = i<,g i P(x IV) P(X) '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-occurro.ric(; Vectors",
"sec_num": null
},
{
"text": "where P(X) is the oCcilrreilce, density of word X hi whole corllus, and the conditional probability l'(x Iv)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-occurro.ric(; Vectors",
"sec_num": null
},
{
"text": "is the density of X in a neight>orhood of word Y, llere the neighl)orhood is defined as 50 words lie.fore or after s.iiy appearance of word Y. (There is a variety of neighborhood definitions Sllch as \"100 sllrrollllding words\" (Yarowsky 1992) and \"within a distance of no more thall 3 words igllorh/g filnction words\" (I)agarl el, al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-occurro.ric(; Vectors",
"sec_num": null
},
{
"text": "The logarithm with '-t-' is dellned to be () for an arg;ument less than 1. Negative estimates were neglected because they are mostly accidental except when X and Y are frequent enough (Chnrch and lIanl,:s 1989).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "l~)n:/).)",
"sec_num": null
},
{
"text": "A co-occurence vector of a word is defined as the list of co-occtlrrellce likelihood of the word with a certahi set o['orighi words. We tlsed the salne set oforight words ;is for the distance vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "l~)n:/).)",
"sec_num": null
},
{
"text": "I(w, \u00a230 l(w,%) CV[w} = I(w, 0,,,) C(~-oeelll'l'elle( ~, V(~t'tol'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "l~)n:/).)",
"sec_num": null
},
{
"text": "When the frequency of X or Y is zero, we can not measure their co-c, ccurence likelihood, and such cruses are not exceptional. This sparseness problem is wellknown and serious in the co-occurrence sLatisC[cs. We used as ~ corpus the 1!)87 Wall Street; JournM in the CI)-I~.OM i (1991), which has a total of 20M words. ']'he nUlliber of words which appeared al, least OIlCe, was about 50% of the total 62I( head words of CEI), and tile. percentage Of\" tile word-origin pairs which appeared tit least once was about 16% of total 62K \u00d7 1K (=62M) pairs. When the co-occurrence likelihood Call liOt Im ineasurc~d> I,he vahle I(X, Y) was set to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "l~)n:/).)",
"sec_num": null
},
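The construction just described can be sketched as follows: mutual information with the 'log+' truncation, a ±50-word neighborhood, and 0 whenever the likelihood cannot be measured. The tokenization and the brute-force counting loop are our own simplifications; a real implementation would precompute neighborhood counts in one pass over the corpus.

```python
# Sketch of co-occurrence vectors: I(w, O) = log+ [P(w|O) / P(w)],
# neighborhood = 50 words before or after each occurrence of origin O.
import math
from collections import Counter

def cooccurrence_vector(w, origins, tokens, window=50):
    counts = Counter(tokens)
    p_w = counts[w] / len(tokens)
    vec = []
    for o in origins:
        near = Counter()  # words seen near any occurrence of origin o
        for i, t in enumerate(tokens):
            if t == o:
                lo = max(0, i - window)
                near.update(tokens[lo:i] + tokens[i + 1:i + window + 1])
        size = sum(near.values())
        if p_w == 0 or size == 0:
            vec.append(0.0)  # sparseness: unmeasurable pairs get 0
            continue
        ratio = (near[w] / size) / p_w   # P(w|o) / P(w)
        vec.append(math.log(ratio) if ratio > 1 else 0.0)  # log+
    return vec

tokens = "the market rose as the bank cut its interest rate".split()
print(cooccurrence_vector("interest", ["bank", "market"], tokens, window=3))
```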
{
"text": "We compared the two vector representations by using them for the following two semantic tmsks. The first is word sense disambiguation (WSD) based on the similarity of context vectors; the second is the learning of positive or negative meanings from example words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental R, esults",
"sec_num": "4"
},
{
"text": "With WSD, the precision by using co-occurrence vectors from a 20M words corpus was higher than by using distance vectors from the CEIL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental R, esults",
"sec_num": "4"
},
{
"text": "Word sense disambiguation is a serious semantic problena. A variety of approaches have been proposed for solving it. For example, V(!ronis and Ide (1990) used reference networks as neural networks, llearst (1991) used (shallow) syntactic similarity between contexts, Cowie el al. (1992) used simulated annealing for quick parallel disambignation, and Yarowsky (1992) used co-occurrence statistics between words and thesaurus categories.",
"cite_spans": [
{
"start": 267,
"end": 286,
"text": "Cowie el al. (1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4.1"
},
{
"text": "Our disambiguation method is based on the shnilarity of context vectors, which was originated by Wilks el al. (1990) . In this method, a context vector is the sum of its constituent word vectors (except the target word itself). That is, tile context vector for context,",
"cite_spans": [
{
"start": 97,
"end": 116,
"text": "Wilks el al. (1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4.1"
},
{
"text": "C: ...W_N ...W_l WWl ...WN, ... ~ is N t v(c) = ~ V(w~).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "4.1"
},
{
"text": "The similarity of contexts is measured by the angle of their vectors (or actually the inner product of their normalized vectors). We infer that the sense of word w in an arhitrary context C is si if for some j the similarity, sire(C, Cij), is maximum among all tile context examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "i= -N",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V(CI) V",
"eq_num": "("
}
],
"section": "i= -N",
"sec_num": null
},
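Putting the pieces together, here is a hedged sketch of the nearest-example rule: sum the word vectors around the target into a context vector, then pick the sense of the single most similar stored example. `word_vectors` would hold either distance vectors or co-occurrence vectors; the helper names and the tiny vectors are ours, for illustration only.

```python
# Sketch of WSD by context-vector similarity (nearest example).
import math

def context_vector(words, word_vectors, dim):
    """Sum of the vectors of the context words (target word excluded)."""
    v = [0.0] * dim
    for w in words:
        for i, x in enumerate(word_vectors.get(w, [0.0] * dim)):
            v[i] += x
    return v

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def disambiguate(context_words, examples, word_vectors, dim):
    """examples: (sense, example_context_words) pairs; returns a sense."""
    v = context_vector(context_words, word_vectors, dim)
    best = max(examples, key=lambda ex: cosine(
        v, context_vector(ex[1], word_vectors, dim)))
    return best[0]

# Tiny invented vectors, just to exercise the code path.
wv = {"money": [1.0, 0.0], "loan": [1.0, 0.2], "river": [0.0, 1.0]}
examples = [("finance", ["money", "loan"]), ("geography", ["river"])]
print(disambiguate(["loan", "money"], examples, wv, dim=2))  # finance
```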
{
"text": "Another possible way to infer the sense is to choose sense si such that the average of sim(C, Cij) over j = 1,2,...,hi is maximum. We selected the first method because a peculiarly similar example is more important than the average similarity. Figure 3 (next page) shows the disamhiguation precision for 9 words. For each word, we selected two senses shown over each graph. These senses were chosen because they are clearly different and we could collect sufficient nmnber (more than 20) of context examples. The names of senses were chosen from the category names in Roger's International Thesaurus, except organ's.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "i= -N",
"sec_num": null
},
{
"text": "The results using distance vectors are shown by clots (. \u2022 .), and using co-occurrence vectors from the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "i= -N",
"sec_num": null
},
{
"text": "A context size (x-axis) of, for example, 10 means 10 words before tile target word and 10 words after tile target word. Wc used 20 examples per sense; they were taken from tlle 1988 WSJ. Tile test contexts were from the 1987 WSJ: The nmnber of test contexts varies from word to word (100 to 1000). The precision is the simple average of the respective precisions for the two senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "vsa (20M words) by cir,.tes (o o o).",
"sec_num": "1987"
},
{
"text": "The results of Fig. 3 show that the precision by using co-occurrence vectors are higher than that by using distance vectors except two cases, interest and customs. And we have not yet found a case where the distance vectors give higher precision. Therefore we conclude that co-occurrence vectors are advantageous over distance vectors to WSD based on the context similarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 21,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "vsa (20M words) by cir,.tes (o o o).",
"sec_num": "1987"
},
{
"text": "The sl)arseness problem for co-occurrence vectors is not serious in this case because each context consists of plural words. In this case, the distance vectors were advantageous. The precision by using distance vectors increased to about 80% and then leveled off, while the precision by using co-occurrence vectors stayed arouud 60%. We can therefore conclude that the property of positive-or-negative is reflected in distance vectors more strongly than ill co-occurrence vectors. Tile sparseness l)roblem is supposed to be a major factor in this case. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "vsa (20M words) by cir,.tes (o o o).",
"sec_num": "1987"
},
{
"text": "In the experiments discussed above, the corpus size for co-occurrence vectors was set to 20M words ('87 WSJ) and the vector dimension for both co-occurrence and distance vectors wins set to 1000. llere we show some supplementary data that support these parameter settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supplementary Data",
"sec_num": null
},
{
"text": "a. Corpus size (for co-occurrence vectors) Figure 6 (next page) shows the dependence of disambiguation precision on the vector dimension for (i) co-occurrence and (ii) distance vectors. As for cooccurrence vectors, the precision levels off near a dimension of 100. Therefore, a dimension size of 1000 is suflicient or cvcn redumlant. IIowever, in the distance vector's case, it is not clear whether the precision is leveling or still increasing around 1000 dimension.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supplementary Data",
"sec_num": null
},
{
"text": "\u2022 A comparison was nlade of co-occnrrence vectors from large text corpora and of distance vectors from dictionary delinitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "\u2022 For tile word sense disambiguation based on the context simihtrity, co-occurrence vectors fl'om tile 1987 Wall Street Journal (20M total words) was advantageous over distance vectors from the Collins l,;nglish Dictionary (60K head words + 1.6M definition words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "\u2022 For learning positive or negalive meanings from example words, distance vectors gave remarkably higher precision than co-occurrence vectors. This suggests, though further investigation is required, that distance w:ctors contain some different semantic information from co-occurrence vectors. vector diln(msion I)ependence on vector dimension for (i) cooccurrence veetors and (ii) distance vectors. context size: 10, examples: 10/sense, corpus size for co-oe, vectors: 20M word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of lhe 27th Annual Meeting of the Association for Computalional Ling,istics",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth W. Church and Patrick llanks. 1989. Word association norms, mutual information, and lexi- cography. In Proceedings of lhe 27th Annual Meet- ing of the Association for Computalional Ling,is- tics, pages 76-83, Vancouver, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lexieal disambiguation using simulated .:mtwaling",
"authors": [
{
"first": "Jim",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Louise",
"middle": [],
"last": "Guthrie",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of COI, ING-92",
"volume": "",
"issue": "",
"pages": "1--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jim Cowie, Joe Guthrie, and Louise Guthrie. 1992. Lexieal disambiguation using simulated .:mtwal- ing. In Proceedings of COI, ING-92, pages 1/59-365, Nantes, France.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextual word similarity and estimation from sparse data",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of Ihe 31st Annual Meeting of the Association for Compulational Linguist&s",
"volume": "",
"issue": "",
"pages": "164--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1993. Contextual word similarity and estimation from sparse data. In Proceedings of Ihe 31st An- nual Meeting of the Association for Compulational Linguist&s, pages 164-171, Columbus, Ohio.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the structure of associative meaning",
"authors": [
{
"first": "James",
"middle": [],
"last": "Deese",
"suffix": ""
}
],
"year": 1962,
"venue": "Psychological Review",
"volume": "69",
"issue": "3",
"pages": "16--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Deese. 1962. On the structure of associative meaning. Psychological Review, 69(3):16F 175.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Noun homograph disambignation using local context in large text eorl)ora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Iiearst",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of lhe 71h Annum Confercncc of Ihe Universily of Walerloo Center for lhc New OEI) and Text Research",
"volume": "",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. IIearst. 1991. Noun homograph disambigna- tion using local context in large text eorl)ora. In Proceedings of lhe 71h Annum Confercncc of Ihe Universily of Walerloo Center for lhc New OEI) and Text Research, pages 1-22, Oxford.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Similarity between words computed by spreading actiw~tion on an english dictionary",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Kozima",
"suffix": ""
},
{
"first": "Teiji",
"middle": [],
"last": "Furugori",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of I'7A CL-93",
"volume": "",
"issue": "",
"pages": "232--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "llideki Kozima and Teiji Furugori. 1993. Similarity between words computed by spreading actiw~tion on an english dictionary. In Proceedings of I'7A CL- 93, pages 232--239, Utrecht, the Netherlands.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "CD-ROM L Associario, for Comlmtational I,inguistics Data Collection Initiative",
"authors": [],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Liberman, editor. 1991. CD-ROM L Associa- rio,, for Comlmtational I,inguistics Data Collection Initiative, University of Pennsylvania.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The referential structure, of the word definitions in ordinary dictionaries, h, Proceedings of lhe Workshop on rite Aspects of Lexicon for Natural Language Processing, LNL88-8, JSSST', pages I-21",
"authors": [
{
"first": "Yoshihiko",
"middle": [],
"last": "Nitta",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshihiko Nitta. 1988. The referential structure, of the word definitions in ordinary dictionaries, h, Pro- ceedings of lhe Workshop on rite Aspects of Lex- icon for Natural Language Processing, LNL88-8, JSSST', pages I-21, Fukuoka University, Japan. (i\" ,1 apanese).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Refi.'rential structure. -a nmchanism for giving word-delinition in ordinary lexicons",
"authors": [
{
"first": "Yoshihiko",
"middle": [],
"last": "Nitta",
"suffix": ""
}
],
"year": 1993,
"venue": "Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "99--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshihiko Nitta. 1993. Refi.'rential structure. -a nmchanism for giving word-delinition in ordinary lexicons. In C. Lee and II. Kant, editors, Lan- guage, Information and Computation, pages 99- 1 t0. Thaehaksa, Seoul.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distance vector representation o[' words, derived from refe.rence networks i,t ordinary dictionaries. MCCS 93-253, (;Oml)l,ting ll.esearch I,aboratory",
"authors": [
{
"first": "Yoshiki",
"middle": [],
"last": "Niwa",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Nitta",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshiki Niwa and Yoshihiko Nitta. 1993. Distance vector representation o[' words, derived from refe.r- ence networks i,t ordinary dictionaries. MCCS 93- 253, (;Oml)l,ting ll.esearch I,aboratory, New Mex- ico State University, l,as Cruces.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Tantmnl)anln. 1957. 7'he Measurement of Meaning",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. 1';. Osgood, (l. F. Such, and P. II. Tantmnl)anln. 1957. 7'he Measurement of Meaning. University of Illinois Press, Urlmna.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "1993. l)istributional clustering of english words",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "Iailian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": null,
"venue": "lit Proceedings of the 31st Annval Meeting of the Association for Computational Lin:luislics, pages I 8;I 190",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira, Naftali Tishby, and IAIlian Lee. 1993. l)istributional clustering of english words. lit Proceedings of the 31st Annval Meeting of the Association for Computational Lin:luislics, pages I 8;I 190, Colmnlms, Ohio.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Longman Dictionary of Contemporary lCnglish (LI)OCE). Long]nan, liarlow",
"authors": [
{
"first": "",
"middle": [],
"last": "Procter",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I'aul Procter, e<lit.or. 1978. Longman Dictionary of Contemporary lCnglish (LI)OCE). Long]nan, liar- low, Essex, tirst edition.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Advances in Neural Information lb'ocessing \u00a3'ystems",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "llinrich Sch/itze. 1993. Word space. % J. D. Cowan ,q. J. llanson an(I C. L. C, iles, editors, Advances in Neural Information lb'ocessing \u00a3'ystems, pages 8!)5 902. Morgan Kaufinann, San Mateo, Califof ,lia.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Collins COBUILD English Language l)iclionary. Collins and t.he Uni-w~rslty of llirmingham",
"authors": [],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Sinclair, editor. 1987. Collins COBUILD En- glish Language l)iclionary. Collins and t.he Uni- w~rslty of llirmingham, London.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word sense disambiguation with very large neural networks extracted from machine readable dictionaries",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Véronis",
"suffix": ""
},
{
"first": "Nancy",
"middle": [
"M"
],
"last": "Ide",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of COLING-90",
"volume": "",
"issue": "",
"pages": "389--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Ve'ro,fis and Nancy M. [de. 1990. Word sense disambiguation with very large neural net- works extracted from machine readable dictionar- ies. In Proceedings of COLING-90, pages 389-394, llelsinki.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Providing machine tractable dictionary tools",
"authors": [
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Fass",
"suffix": ""
},
{
"first": "Cheng-Ming",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "James",
"middle": [
"E"
],
"last": "McDonald",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Plate",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"M"
],
"last": "Slator",
"suffix": ""
}
],
"year": 1990,
"venue": "MeDolmhl, Tony Plate, and Ilrian M. Slator",
"volume": "5",
"issue": "",
"pages": "99--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yorick Wilks, I)a,, Fass, Cheng mint Guo, James 1\". MeDolmhl, Tony Plate, and Ilrian M. Slator. 1990. Providing machine tractable dictionary tools. Ma- chine Translation, 5(2):99 154.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word-sense disambigualion using statistieal models of roget's categories trained on large corpora",
"authors": [],
"year": 1992,
"venue": "Proceedings of COLING-92",
"volume": "",
"issue": "",
"pages": "454--460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l)avid Yarowsky. 1992. Word-sense disambigua- lion using statistieal models of roget's categories trained on large corpora. In Proceedings of COLING-92, pages 454-460, Nantes, France.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Portion of a reference network.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Links between two words.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Normalizationl)istance vectors ;ire norrrialized by first changing each coordinal,e into its deviation in the coordin;tLe:where a i and o i are the average and the standaM deviation of the distances fi'om the i-th origin. Next, each",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "C2) sim(C~,C.~) = lv(C~)l IV(C2)l' Let word w have senses sl, s2, ..., sll I } a|'ld each sells(; have the following context examples.",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Another experiment using the same two vector representations was done to measure tile learning of positive or negative meanings. 1,'igure 4 shows tile changes in the precision (the percentage of agreement with the authors' combined judgement). The x-axis indicates tile nunll)er of example words for each positive or ~tegalive pair. Judgement w~s again done by using the nearest example. The example and test words are shown in Tables 1 and 2, respectively.",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "hy using ro-ot:rm'rence vectors(ooo) m,l hy using distance w.*ctors(--,). (The number of examples is 10 ['or each sense.) of posiffve-or-negative.",
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"num": null,
"text": "shows the change in disambiguation pre-eision as the corpus size for co-occurrence statistics increases from 200 words to 20M words. (The words are suit, issue and race, the context size is 10, and the number of examples per sense is 10.) These three graphs level off after around IM words. Therefore, a corpus size of 20M words is not too small. Dependence of the disambiguation precision on the corpus size for c.o-occurrence vectors. context size: 10, number of examples: 10/sense, vector dimension: 1000. l). Vector Dimension",
"type_str": "figure",
"uris": null
},
"FIGREF7": {
"num": null,
"text": "Fig. 6",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"4\">positive negative</td><td/><td>positive</td><td>negative</td></tr><tr><td>1</td><td>true</td><td/><td>false</td><td colspan=\"2\">16</td><td>l)roperly</td><td>crime</td></tr><tr><td>2</td><td>new</td><td/><td>wrong</td><td/><td>17</td><td>succeed</td><td>(lie</td></tr><tr><td>3</td><td>better</td><td/><td colspan=\"2\">disease</td><td>18</td><td>worth</td><td>violent</td></tr><tr><td>,l</td><td>clear</td><td/><td>angry</td><td/><td>19</td><td>friendly</td><td>hurt</td></tr><tr><td colspan=\"3\">5 pleasure</td><td>noise</td><td/><td colspan=\"2\">20 useful</td><td>punishment,</td></tr><tr><td colspan=\"3\">6 correct</td><td>pain</td><td/><td colspan=\"2\">21 success</td><td>poor</td></tr><tr><td colspan=\"3\">7 pleasant</td><td>lose</td><td/><td colspan=\"2\">22 intcrestlng badly</td></tr><tr><td colspan=\"3\">8 snltable</td><td colspan=\"2\">destroy</td><td colspan=\"2\">23 active</td><td>fail</td></tr><tr><td colspan=\"2\">9 clean</td><td/><td colspan=\"4\">dangerous 2,1 polite</td><td>suffering</td></tr><tr><td colspan=\"4\">10 advantage harm</td><td/><td colspan=\"2\">25 win</td><td>enemy</td></tr><tr><td colspan=\"2\">11 love</td><td/><td>kill</td><td/><td colspan=\"2\">26 improve</td><td>rude</td></tr><tr><td colspan=\"2\">12 best</td><td/><td>fear</td><td/><td colspan=\"2\">27 favour</td><td>danger</td></tr><tr><td>13</td><td colspan=\"3\">snccessfld war</td><td/><td colspan=\"2\">28 development anger</td></tr><tr><td>1,1</td><td colspan=\"3\">attractive ill</td><td/><td colspan=\"2\">29 happy</td><td>waste</td></tr><tr><td>15</td><td colspan=\"6\">powerful foolish 30 praise</td><td>doubt</td></tr><tr><td/><td/><td/><td colspan=\"4\">Table 2 Test words.</td></tr><tr><td colspan=\"5\">positive (20 words)</td><td/></tr><tr><td colspan=\"6\">balanced elaborate elation</td><td>eligible enjoy</td></tr><tr><td colspan=\"2\">fluent</td><td colspan=\"5\">honorary Imnourable hopeful</td><td>hopefully</td></tr><tr><td colspan=\"6\">influential interested legible</td><td>lustre</td><td>normal</td></tr><tr><td colspan=\"4\">recreation replete</td><td colspan=\"2\">resilient</td><td>restorative sincere</td></tr><tr><td/><td colspan=\"4\">negative (30 words)</td><td/></tr><tr><td colspan=\"4\">conflmion cuckold</td><td/><td>dally</td><td>daumation dull</td></tr><tr><td colspan=\"2\">ferocious</td><td/><td>flaw</td><td/><td colspan=\"2\">hesitate hostage</td><td>huddle</td></tr><tr><td colspan=\"4\">inattentive liverlsh</td><td/><td colspan=\"2\">lowly</td><td>mock</td><td>neglect</td></tr><tr><td colspan=\"2\">queer</td><td/><td>rape</td><td/><td colspan=\"2\">ridiculous savage</td><td>scanty</td></tr><tr><td colspan=\"2\">sceptical</td><td/><td colspan=\"3\">schizophrenia scoff</td><td>scrnffy</td><td>shipwreck</td></tr><tr><td colspan=\"5\">superstition sycophant</td><td colspan=\"2\">trouble</td><td>wicked worthless</td></tr><tr><td>4.3</td><td/><td/><td/><td/><td/></tr></table>",
"text": "Example pairs.",
"type_str": "table"
}
}
}
}