{
"paper_id": "E93-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:54:40.328515Z"
},
"title": "Similarity between Words Computed by Spreading Activation on an English Dictionary",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Kozima",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Electro-Communications",
"location": {
"addrLine": "1-5-1",
"postCode": "182",
"settlement": "Chofugaoka, Chofu, Tokyo",
"country": "Japan"
}
},
"email": "xkozima@phaeton.cs.uec.ac.j"
},
{
"first": "Teiji",
"middle": [],
"last": "Furugori",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Electro",
"location": {
"addrLine": "-Communications 1-5-1, Chofugaoka",
"postCode": "182",
"settlement": "Chofu, Tokyo",
"country": "Japan"
}
},
"email": "furugori@phaeton.cs.uec.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a method for measuring semantic similarity between words as a new tool for text analysis. The similarity is measured on a semantic network constructed systematically from a subset of the English dictionary, LDOCE (Longman Dictionary of Contemporary English). Spreading activation on the network can directly compute the similarity between any two words in the Longman Defining Vocabulary, and indirectly the similarity of all the other words in LDOCE. The similarity represents the strength of lexical cohesion or semantic relation, and also provides valuable information about similarity and coherence of texts.",
"pdf_parse": {
"paper_id": "E93-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a method for measuring semantic similarity between words as a new tool for text analysis. The similarity is measured on a semantic network constructed systematically from a subset of the English dictionary, LDOCE (Longman Dictionary of Contemporary English). Spreading activation on the network can directly compute the similarity between any two words in the Longman Defining Vocabulary, and indirectly the similarity of all the other words in LDOCE. The similarity represents the strength of lexical cohesion or semantic relation, and also provides valuable information about similarity and coherence of texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A text is not just a sequence of words, but it also has coherent structure. The meaning of each word in a text depends on the structure of the text. Recognizing the structure of text is an essential task in text understanding. [Grosz and Sidner, 1986] One of the valuable indicators of the structure of text is lexical cohesion. [Halliday and Hasan, 1976] Lexical cohesion is the relationship between words, classified as follows:",
"cite_spans": [
{
"start": 227,
"end": 251,
"text": "[Grosz and Sidner, 1986]",
"ref_id": null
},
{
"start": 329,
"end": 355,
"text": "[Halliday and Hasan, 1976]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Molly likes cats. She keeps a cat. Reiteration of words is easy to capture by morphological analysis. Semantic relation between words, which is the focus of this paper, is hard to recognize by computers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reiteration:",
"sec_num": "1."
},
{
"text": "We consider lexical cohesion as semantic similarity between words. Similarity is Computed by spreading activation (or association) [Waltz and Pollack, 1985] on a semantic network constructed systematically from an English dictionary. Whereas it is edited by some lexicographers, a dictionary is a set of associative relation shared by the people in a linguistic community.",
"cite_spans": [
{
"start": 131,
"end": 156,
"text": "[Waltz and Pollack, 1985]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reiteration:",
"sec_num": "1."
},
{
"text": "The similarity between words is a mapping a: Lx L ---* [0, 1], where L is a set of words (or lexicon). The following examples suggest the feature of the similarity: a(cat, pet) = 0.133722 (similar), a(cat, mat) = 0.002692 (dissimilar).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reiteration:",
"sec_num": "1."
},
{
"text": "The value of a(w, w') increases with strength of semantic relation between w and w'. The following section examines related work in order to clarify the nature of the semantic similarity. Section 3 describes how the semantic network is systematically constructed from the English dictionary. Section 4 explains how to measure the similarity by spreading activation on the semantic network. Section 5 shows applications of the similarity measure -computing similarity between texts, and measuring coherence of a text. Section 6 discusses the theoretical aspects of the similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reiteration:",
"sec_num": "1."
},
{
"text": "Words in a language are organized by two kinds of relationship. One is a syntagmatic relation: how the words are arranged in sequential texts. The other is a Figure 1 . A psycholinguistic measurement (semantic differential [Osgood, 1952] ).",
"cite_spans": [
{
"start": 223,
"end": 237,
"text": "[Osgood, 1952]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity",
"sec_num": null
},
{
"text": "paradigmatic relation: how the words are associated with each other. Similarity between words can be defined by either a syntagmatic or a paradigmatic relation. Syntagmatic similarity is based on co-occurrence data extracted from corpora [Church and Hanks, 1990] , definitions in dictionaries [Wilks etal., 1989] , and so on. Paradigmatic similarity is based on association data extracted from thesauri [Morris and Hirst, 1991] , psychological experiments [Osgood, 1952] , and so on. This paper concentrates on paradigmatic similarity, because a paradigmatic relation can be established both inside a sentence and across sentence boundaries, while syntagmatic relations can be seen mainly inside a sentence --like syntax deals with sentence structure. The rest of this section focuses on two related works on measuring paradigmatic similarity --a psycholinguistic approach and a thesaurus-based approach.",
"cite_spans": [
{
"start": 238,
"end": 262,
"text": "[Church and Hanks, 1990]",
"ref_id": null
},
{
"start": 293,
"end": 312,
"text": "[Wilks etal., 1989]",
"ref_id": null
},
{
"start": 403,
"end": 427,
"text": "[Morris and Hirst, 1991]",
"ref_id": "BIBREF7"
},
{
"start": 456,
"end": 470,
"text": "[Osgood, 1952]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity",
"sec_num": null
},
{
"text": "Psycholinguists have been proposed methods for measuring similarity. One of the pioneering works is 'semantic differential' [Osgood, 1952] which analyses meaning of words into a range of different dimensions with the opposed adjectives at both ends (see Figure 1) , and locates the words in the semantic space.",
"cite_spans": [
{
"start": 124,
"end": 138,
"text": "[Osgood, 1952]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 254,
"end": 263,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Psycholinguistic Approach",
"sec_num": "2.1"
},
{
"text": "Recent works on knowledge representation are somewhat related to Osgood's semantic differential. Most of them describe meaning of words using special symbols like microfeatures [Waltz and Pollack, 1985; Hendler, 1989 ] that correspond to the semantic dimensions.",
"cite_spans": [
{
"start": 177,
"end": 202,
"text": "[Waltz and Pollack, 1985;",
"ref_id": null
},
{
"start": 203,
"end": 216,
"text": "Hendler, 1989",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Psycholinguistic Approach",
"sec_num": "2.1"
},
{
"text": "However, the following problems arise from the semantic differential procedure as measurement of meaning. The procedure is not based on the denotative meaning of a word, but only on the connotative emotions attached to the word; it is difficult to choose the relevant dimensions, i.e. the dimensions required for the sufficient semantic space. Morris and Hirst [1991] used Roget's thesaurus as knowledge base for determining whether or not two words are semantically related. For example, the semantic relation of truck/car and drive/car are captured in the following way: This method can capture Mmost all types of semantic relations (except emotional and situational relation), such as paraphrasing by superordinate (ex. cat/pet), systematic relation (ex. north/east), and non-systematic relation (ex. theatre/fi]~).",
"cite_spans": [
{
"start": 344,
"end": 367,
"text": "Morris and Hirst [1991]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Psycholinguistic Approach",
"sec_num": "2.1"
},
{
"text": "However, thesauri provide neither information about semantic difference between words juxtaposed in a category, nor about strength of the semantic relation between words --both are to be dealt in this paper. The reason is that thesauri axe designed to help writers find relevant words, not to provide the meaning of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Thesaurus-based Approach",
"sec_num": "2.2"
},
{
"text": "We analyse word meaning in terms of the semantic space defined by a semantic network, called Paradigme. Paradigme is systematically constructed from Gloss~me, a subset of an English dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigme: A Field for Measuring Similarity",
"sec_num": "3"
},
{
"text": "English A dictionary is a closed paraphrasing system of natural language. Each of its headwords is defined by a phrase which is composed of the headwords and their derivations. A dictionary, viewed as a whole, looks like a tangled network of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gloss~me --A Closed Subsystem of",
"sec_num": "3.1"
},
{
"text": "We adopted Longman Dictionary of Contemporary English (LDOCE) [1987] as such a closed system of English. LDOCE has a unique feature that each of its 56,000 headwords is defined by using the words in Longman Defining Vocabulary (hereafter, LDV) and their derivations. LDV consists of 2,851 words (as the headwords in LDOCE) based on the survey of restricted vocabulary [West, 1953] .",
"cite_spans": [
{
"start": 62,
"end": 68,
"text": "[1987]",
"ref_id": null
},
{
"start": 368,
"end": 380,
"text": "[West, 1953]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gloss~me --A Closed Subsystem of",
"sec_num": "3.1"
},
{
"text": "We made a reduced version of LDOCE, called Glossdme. Gloss~me has every entry of LDOCE whose headword is included in LDV. Thus, LDVis defined by Gloss~me, and Glossdme is composed of ...... LDV. Gloss~me is a closed subsystem of English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gloss~me --A Closed Subsystem of",
"sec_num": "3.1"
},
{
"text": "GIoss~me has 2,851 entries that consist of 101,861 words (35.73 words/entry on the average). An item of Gloss~me has a headword, a word-class, and one or more units corresponding to numbered definitions in the entry of LDOCE. Each unit has one headpart and several det-parts. The head-part is the first phrase in the definition, which describes the broader red t /red/ adj -dd-1 of the colour of blood or fire: a red rose~dress [ We painted the door red. --see also like a red rag to a bull (RAG 1) 2 (of human hair) of a bright brownish orange or copper colour 3 (of the human skin) pink, usa. for a short time: I turned red with embarrassment~anger. I The child's eye (= the skin round the eyes) were red from crying. 4 (of wine) of a dark pink to dark purple colour -~n~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gloss~me --A Closed Subsystem of",
"sec_num": "3.1"
},
{
"text": "(red adj ((of the colour) (of blood or fire) ) ((of a bright brownish (of human hair) ) (pink (usu for a short time) ;; referent ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gloss~me --A Closed Subsystem of",
"sec_num": "3.1"
},
{
"text": "We then translated Gloss~me into a semantic network Paradigme. Each entry in Gloss~me is mapped onto a node in Paradigme. Paradigme has 2,851 nodes and 295,914 unnamed links between the nodes (103.79 links/node on the average). Figure 3 shows a sample node red_l. Each node consists of a headword, a word-class, an activity-value, and two sets of links: a rdf4rant and a rdfdrd. a(w, w') on Paradigme.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 379,
"end": 387,
"text": "a(w, w')",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paradlgme --A Semantic Network",
"sec_num": "3.2"
},
{
"text": "(1) Start activating w. 2Produce an activated pattern. 3Observe activity of w'. T (steps) Figure 5 . An activated pattern produced from red (changing of activity values of 10 nodes holding highest activity at T= 10).",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paradlgme --A Semantic Network",
"sec_num": "3.2"
},
{
"text": "Similarity between words is computed by spreading activation on Paradigme. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Similarity between Words",
"sec_num": "4"
},
{
"text": "Activating a node for a certain period of time causes the activity to spread over Paradigme and produce an activated pattern on it. The activated pattern approximately gets equilibrium after 10 steps, whereas it will never reach the actual equilibrium. The pattern thus produced represents the meaning of the node or of the words related to the node by morphological analysis 1. The activated pattern, produced from a word w, suggests similarity between w and any headword in where s(w) is significance of the word w. Then, an activated pattern P(w) is produced on Paradigmc. 3. Observe a(P(w), w') --an activity value of the node w' in P(w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Similarity",
"sec_num": "4.1"
},
{
"text": "Then, a(w, w') is s(w').a(P(w), w'). The word significance s(w) E [0, 1] is defined as the normalized information of the word w in the corpus [West, 1953] . For example, the word red appears 2,308 times in the 5,487,056-word corpus, and the word and appears 106,064 times. So, s(red) and s(and) are computed as follows:",
"cite_spans": [
{
"start": 142,
"end": 154,
"text": "[West, 1953]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Similarity",
"sec_num": "4.1"
},
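The procedure above (activate w, spread the activity for 10 steps, then read off s(w')·a(P(w), w')) can be sketched on a toy network. Everything below is an invented stand-in for illustration: the four nodes, link thicknesses, significance values, decay factor, and capped update rule are not Paradigme's actual model (the paper's own update rule is given in its appendix).

```python
# sigma(w, w') = s(w') * a(P(w), w'): activate w, let activity spread,
# then read off the activity of w'. All data here are toy values.
links = {  # node -> {neighbour: link thickness}
    "red":    {"colour": 0.5, "blood": 0.3, "fire": 0.2},
    "colour": {"red": 0.6, "blood": 0.4},
    "blood":  {"red": 0.5, "colour": 0.5},
    "fire":   {"red": 1.0},
}
s = {"red": 0.50, "colour": 0.40, "blood": 0.45, "fire": 0.45}  # significance

def pattern(source, steps=10, decay=0.5):
    """Produce an activated pattern P(source) by spreading activation."""
    a = {n: 0.0 for n in links}
    for _ in range(steps):
        nxt = {}
        for node in links:
            # activity flowing into `node` over all incoming links
            received = sum(act * w for src, act in a.items()
                           for dst, w in links[src].items() if dst == node)
            # only the source node is externally pumped
            pumped = s[source] if node == source else 0.0
            nxt[node] = min(1.0, decay * a[node] + (1 - decay) * (received + pumped))
        a = nxt
    return a

def sigma(w1, w2):
    """Directed similarity: significance-weighted activity of w2 in P(w1)."""
    return s[w2] * pattern(w1)[w2]
```

On this toy graph, sigma("red", "colour") comes out larger than sigma("red", "fire"), mirroring the directed, significance-weighted behaviour the text describes.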
{
"text": "log(230S/5487056) s(red) = --1og(1/5487056) --0.500955, -1og(106064/5487056) s(and) = --1og(1/5487056) = 0.254294. We estimated the significance of the words excluded from the word list [West, 1953] at the average significance of their word classes. This interpolation virtually enlarged West's 5,000,000-word corpus.",
"cite_spans": [
{
"start": 186,
"end": 198,
"text": "[West, 1953]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Similarity",
"sec_num": "4.1"
},
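As a minimal sketch, the word-significance values above can be reproduced directly from the stated counts. The function name `significance` is ours; the corpus size 5,487,056 and the frequencies of red and and are taken from the text.

```python
import math

def significance(freq, corpus_size=5_487_056):
    """Normalized information of a word in the corpus:
    s(w) = log(freq / N) / log(1 / N), which lies in [0, 1]."""
    return math.log(freq / corpus_size) / math.log(1 / corpus_size)

print(round(significance(2_308), 6))    # s(red); the paper reports 0.500955
print(round(significance(106_064), 6))  # s(and); the paper reports 0.254294
```

A word occurring only once gets significance 1, and a word occurring as often as the corpus itself gets 0, which is why frequent function words like and score low.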
{
"text": "For example, let us consider the similarity between red and orange. First, we produce an activated pattern P(red) on Paradigrae. (See ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Similarity",
"sec_num": "4.1"
},
{
"text": "Note that a(w, w') has direction (from w to w'), so that a(w, w') may not be equal to a(w', Note that the reflective similarity a(w,w) also depends on the significance s(w), so that cr(w,w) < 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CO-",
"sec_num": null
},
{
"text": "a(waiter, waiter)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CO-",
"sec_num": null
},
{
"text": "= 0.596803 , er(of, of) = 0.045256.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CO-",
"sec_num": null
},
{
"text": "The similarity of words in LDV and their derivations is measured directly on Paradigme; the similarity of extra words is measured indirectly on Paradigme by treating an extra word as a word list W = {Wl,..., wn} of its definition in LDOCE. (Note that each wi E W is included in LDV or their derivations.) The similarity between the word lists W, W ~ is defined as follows. (See aiso Figure 6 .) or(W, W') = \u00a2 (~t0'ew' s(w').a(P(W),w')), As shown in Figure 7 , bottle_l and wine_l have high activity in the pattern produced from the phrase \"red alcoholic drink\". So, we may say that the overlapped pattern implies % bottle of wine\". For example, the similarity between linguistics and stylistics, both are the extra words, is computed as follows: {the, study, of, language, in, general, and, of, particular, languages, and, their, structure, and, grammar, and, history}, {the, study, of, style, in, written, or, spoken, language} ) = 0.140089.",
"cite_spans": [
{
"start": 746,
"end": 930,
"text": "{the, study, of, language, in, general, and, of, particular, languages, and, their, structure, and, grammar, and, history}, {the, study, of, style, in, written, or, spoken, language} )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 383,
"end": 391,
"text": "Figure 6",
"ref_id": null
},
{
"start": 449,
"end": 457,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity of Extra Words",
"sec_num": "4.3"
},
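The word-list reduction can be sketched in the same toy style. The mini network, significance values, toy "definitions", and the capped sum standing in for the combination function φ are all invented for illustration; only the overall shape (activate every word of W at once, then sum s(w')·a(P(W), w') over W') follows the text.

```python
# sigma(W, W') for word lists: activate all of W at once, spread,
# then accumulate significance-weighted activity over W'. Toy data only.
links = {  # node -> {neighbour: link thickness}
    "study":    {"language": 0.6, "style": 0.4},
    "language": {"study": 0.5, "style": 0.5},
    "style":    {"language": 1.0},
}
s = {"study": 0.5, "language": 0.55, "style": 0.5}  # significance

def pattern(words, steps=10, decay=0.5):
    """Activated pattern P(W): every word of the list W is pumped at once."""
    a = {n: 0.0 for n in links}
    for _ in range(steps):
        nxt = {}
        for node in links:
            received = sum(act * w for src, act in a.items()
                           for dst, w in links[src].items() if dst == node)
            pumped = s[node] if node in words else 0.0
            nxt[node] = min(1.0, decay * a[node] + (1 - decay) * (received + pumped))
        a = nxt
    return a

def sigma_lists(W1, W2):
    """sigma(W, W'): a capped sum stands in for the paper's phi."""
    p = pattern(W1)
    return min(1.0, sum(s[w] * p[w] for w in W2))

# Hypothetical "definitions" of two extra words, cut down to the toy lexicon:
linguistics_def = ["study", "language"]
stylistics_def = ["study", "style"]
```

An extra word is thus never a node itself; its definition words carry its meaning into the network, which is why any headword of LDOCE becomes comparable.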
{
"text": "Obviously, both ~r(W,w) and a(w, W), where W is an extra word and w is not, are also computable. Therefore, we can compute the similarity between any two headwords in LDOCE and their derivations. (recalling the most similar episode in memory).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity of Extra Words",
"sec_num": "4.3"
},
{
"text": "This section shows the application of the similarity between words to text analysis --measuring similarity between texts, and measuring text coherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications of the Similarity",
"sec_num": "5"
},
{
"text": "Suppose a text is a word list without syntactic structure. Then, the similarity ~r(X,X') between two texts X, X' can be computed as the similarity of extra words described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Similarity between Texts",
"sec_num": "5.1"
},
{
"text": "The following examples suggest that the similarity between texts indicates the strength of coherence relation between them: \"Where do you live?\" ) = 0.007676 . It is worth noting that meaningless iteration of words (especially, of function words) has less influence on the text similarity: a(\"It is a dog.\", \"That must be your dog.\")= 0.252536, ff(\"It is a doE.\", \"It is a log.\" ) = 0.053261 . The text similarity provides a semantic space for text retrieval --to recall the most similar text in X' { 1,\"\" X'} to the given text X. Once the activated pattern P(X) of the text X is produced on Paradigms, we can compute and compare the similarity a(X, XI), .-., a(X, X') immediately. (See ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Similarity between Texts",
"sec_num": "5.1"
},
{
"text": "Let us consider the reflective similarity a(X, X) of a text X, and use the notation c(X) for a(X, X). Then, c(X) can be computed as follows: = \u00a2 (E. x , (,O,(P(X) .,,,)).",
"cite_spans": [
{
"start": 153,
"end": 162,
"text": "(,O,(P(X)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "The activated pattern P(X), as shown in Figure 7, represents the average meaning of wl @ X. So, c(X) represents cohesiveness of X --or semantic closeness of w 6 X, or semantic compactness of X. (It is also closely related to distortion in clustering.)",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 49,
"text": "Figure 7,",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "The following examples suggest that c(X) indicates the strength of coherence of X:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "c (\"She opened the world with her typewriter. Her work was typing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "But She did not type quickly.\" ) = 0.502510 (coherent), c (\"Put on your clothes at once. I can not walk ten miles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "There is no one here but me.\" ) = 0.250840 (incoherent).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "However, a cohesive text can be incoherent; the following example shows cohesiveness of the incoherent text --three sentences randomly selected from LDOCE: c (\"I saw a lion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "A lion belongs to the cat family. My family keeps a pet.\" ) = 0.560172 (incoherent, but cohesive).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "Thus, c(X) can not capture all the aspects of text coherence. This is because c(X) is based only on the lexical cohesion of the words in X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Text Coherence",
"sec_num": "5.2"
},
{
"text": "The structure of Paradigme represents the knowledge system of English, and an activated state produced on it represents word meaning. This section discusses the nature of the structure and states of Paradigms, and also the nature of the similarity computed on it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The set of all the possible activated patterns produced on Paradigms can be considered as a semantic space where each state is represented as a point. The semantic space is a 2,851-dimensional hypercube; each of its edges corresponds to a word in LDV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigms and Semantic Space",
"sec_num": "6.1"
},
{
"text": "LDV is selected according to the following information: the word frequency in written English, and the range of contexts in which each word appears. So, LDV has a potential for covering all the concepts commonly found in the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigms and Semantic Space",
"sec_num": "6.1"
},
{
"text": "This implies the completeness of LDV as dimensions of the semantic space. Osgood's semantic differential procedure [1952] used 50 adjective dimensions; our semantic measurement uses 2,851 dimensions with completeness and objectivity.",
"cite_spans": [
{
"start": 115,
"end": 121,
"text": "[1952]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigms and Semantic Space",
"sec_num": "6.1"
},
{
"text": "Our method can be applied to construct a semantic network from an ordinary dictionary whose defining vocabulary is not restricted. Such a network, however, is too large to spread activity over it. Paradigme is the small and complete network for measuring the similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigms and Semantic Space",
"sec_num": "6.1"
},
{
"text": "The proposed similarity is based only on the denotational and intensional definitions in the dictionary LDOCE. Lack of the connotational and extensional knowledge causes some unexpected results of measuring the similarity. For example, consider the following similarity: ~(tree, leaf) = 0.008693. This is due to the nature of the dictionary definitions-they only indicate sufficient conditions of the headword. For example, the definition of tree in LDOCE tells nothing about leaves: tree n 1 a tall plant with a wooden trunk and branches, that lives for many years 2 a bush or other plant with a treelike form 3 a drawing with a branching form, esp. as used for showing family relationships",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation and Extension of Words",
"sec_num": "6.2"
},
{
"text": "However, the definition is followed by pictures of leafy trees providing readers with connotational and extensional stereotypes of trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connotation and Extension of Words",
"sec_num": "6.2"
},
{
"text": "In the proposed method, the definitions in LDOCE are treated as word lists, though they are phrases with syntactic structures. Let us consider the following definition of lift:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigmatic and Syntagmatic Similarity",
"sec_num": "6.3"
},
{
"text": "llft v 1 to bring from a lower to a higher level; raise 2 (of movable parts) to be able to be lifted 3 ---Anyone can imagine that something is moving upward. But, such a movement can not be represented in the activated pattern produced from the phrase. The meaning of a phrase, sentence, or text should be represented as pattern changing in time, though what we need is static and paradigmatic relation. This paradox also arises in measuring the similarity between texts and the text coherence. As we have seen in Section 5, there is a difference between the similarity of texts and the similarity of word lists, and also between the coherence of a text and cohesiveness of a word list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigmatic and Syntagmatic Similarity",
"sec_num": "6.3"
},
{
"text": "However, so far as the similarity between words is concerned, we assume that activated patterns on Paradigme will approximate the meaning of words, like a still picture can express a story.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paradigmatic and Syntagmatic Similarity",
"sec_num": "6.3"
},
{
"text": "We described measurement of semantic similarity between words. The similarity between words is computed by spreading activation on the semantic net-work Paradigme which is systematically constructed from a subset of the English dictionary LDOCE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Paradigme can directly compute the similarity between any two words in LDV, and indirectly the similarity of all the other words in LDOCE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The similarity between words provides a new method for analysing the structure of text. It can be applied to computing the similarity between texts, and measuring the cohesiveness of a text which suggests coherence of the text, as we have seen in Section 5. And, we are now applying it to text segmentation [Grosz and Sidner, 1986; Youmans, 1991] , i.e. to capture the shifts of coherent scenes in a story.",
"cite_spans": [
{
"start": 307,
"end": 331,
"text": "[Grosz and Sidner, 1986;",
"ref_id": null
},
{
"start": 332,
"end": 346,
"text": "Youmans, 1991]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In future research, we intend to deal with syntagmatic relations between words. Meaning of a text lies in the texture of paradigmatic and syntagmatic relations between words [Hjelmslev, 1943] . Paradigme provides the former dimension --an associative system of words --as a screen onto which the meaning of a word is projected like a still picture. The latter dimension --syntactic process --will be treated as a film projected dynamically onto Paradigme. This enables us to measure the similarity between texts as a syntactic process, not as word lists.",
"cite_spans": [
{
"start": 174,
"end": 191,
"text": "[Hjelmslev, 1943]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We regard Paradigme as a field for the interaction between text and episodes in memory --the interaction between what one is hearing or reading and what one knows [Schank, 1990] . The meaning of words, sentences, or even texts can be projected in a uniform way on Paradigme, as we have seen in Section 4 and 5. Similarly, we can project text and episodes, and recall the most relevant episode for interpretation of the text.",
"cite_spans": [
{
"start": 163,
"end": 177,
"text": "[Schank, 1990]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "Step 1. For each entry Gi in Glossdme, map each unit uij in Gi onto a subr6f~rant sij of the corresponding node Pi in Paradigme. Each word wij,, E uij is mapped onto a link or links in sij, in the following way:1. Let t, be the reciprocal of the number of appearance of wij, (as its root form) in GIoss~me.2. If wij, is in a head-part, let t, be doubled.3. Find nodes {Pnl,P,~,\"'} corresponds to wlj, where t~k is thickness of the/~-th link ofri, and a~k is activity (at time T) of the node referred by the k-th link of ri.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Step 2. For each node P/, compute thickness hij of each subr~f&ant sij in the following way: 1. Let m/be the number of subr~f~rants of P/. 2. Let hij be 2ml-1-j",
"authors": [],
"year": null,
"venue": "thickness of the links as ~\"~k tlp",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "thickness of the links as ~\"~k tlp, = 1, in each Step 2. For each node P/, compute thickness hij of each subr~f&ant sij in the following way: 1. Let m/be the number of subr~f~rants of P/. 2. Let hij be 2ml-1-j. (Note that hll : h/,n = 2 : 1.)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Normalize thickness hij as ~\"~j h/j = 1",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Normalize thickness hij as ~\"~j h/j = 1, in each P,.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generate r~f~r6 of each node in Paradigme, in the following way: 1. For each node P/in Paradigme, let its r~f~r~ ri be an empty set",
"authors": [],
"year": null,
"venue": "Step 3",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Step 3. Generate r~f~r6 of each node in Paradigme, in the following way: 1. For each node P/in Paradigme, let its r~f~r~ ri be an empty set.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "For each P~, for each subr~f~rant sij of Pi, for each link lijk in sij: a. Let Pii~ be the node referred by i/i~, and let t~i~ be thickness of Ilia",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "For each P~, for each subr~f~rant sij of Pi, for each link lijk in sij: a. Let Pii~ be the node referred by i/i~, and let t~i~ be thickness of Ilia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Add a new link ! ~ to r~f~r~ of Pi~, where ! ~ is a link to P/with thickness t' = h~i",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "b. Add a new link l' to the référé of Pijk, where l' is a link to Pi with thickness t' = hij · tijk.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "}, where l'ij is a link with thickness t'ij. Then, normalize thickness of the links as Σj t'ij = 1, in each ri. References [Church and Hanks, 1990] K. Church and P. Hanks. Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "M",
"middle": [
"A K"
],
"last": "Halliday",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hasan",
"suffix": ""
}
],
"year": 1976,
"venue": "Thus, each ri becomes a set of links: {l'i1, l'i2, ...",
"volume": "16",
"issue": "",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thus, each ri becomes a set of links: {l'i1, l'i2, ...}, where l'ij is a link with thickness t'ij. Then, normalize thickness of the links as Σj t'ij = 1, in each ri. References [Church and Hanks, 1990] K. Church and P. Hanks. Word association norms, mutual information, and lexicography. Computational Linguistics, 16:22-29, 1990. [Grosz and Sidner, 1986] B. J. Grosz and C. L. Sidner. Attention, intentions, and the structure of discourse. Computational Linguistics, 12:175-204, 1986. [Halliday and Hasan, 1976] M. A. K. Halliday and R. Hasan. Cohesion in English. Longman, Harlow, Essex, 1976.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Marker-passing over microfeatures: Towards a hybrid symbolic / connectionist model",
"authors": [
{
"first": ";",
"middle": [
"J A"
],
"last": "Hendler",
"suffix": ""
},
{
"first": "; L",
"middle": [],
"last": "Hendler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hjelmslev",
"suffix": ""
}
],
"year": 1943,
"venue": "Omkring Sprogteoriens Grundl~eggelse. Akademisk Forlag, Kcbenhavn",
"volume": "13",
"issue": "",
"pages": "79--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Hendler, 1989] J. A. Hendler. Marker-passing over microfeatures: Towards a hybrid symbolic / connectionist model. Cognitive Science, 13:79-106, 1989. [Hjelmslev, 1943] L. Hjelmslev. Omkring Sprogteoriens Grundlæggelse. Akademisk Forlag, København, 1943.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "17",
"issue": "",
"pages": "21--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[LDO, 1987] Longman Dictionary of Contemporary English. Longman, Harlow, Essex, new edition, 1987. [Morris and Hirst, 1991] J. Morris and G. Hirst. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21-48, 1991. [Osgood, 1952] C. E. Osgood. The nature and measurement of meaning. Psychological Bulletin, 49:197-237, 1952.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tell Me a Story: A New Look at Real and Artificial Memory",
"authors": [
{
"first": "; ] R",
"middle": [
"C"
],
"last": "Schank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schank",
"suffix": ""
}
],
"year": 1990,
"venue": "Massively parallel parsing: A strongly interactive model of natural language interpretation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Schank, 1990] R. C. Schank. Tell Me a Story: A New Look at Real and Artificial Memory. Scribner, New York, 1990. [Waltz and Pollack, 1985] D. L. Waltz and J. B. Pollack. Massively parallel parsing: A strongly interactive model of natural language interpretation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A tractable machine dictionary as a resource for computational semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1953,
"venue": "Computational Lexicography for Natural Language Processing",
"volume": "67",
"issue": "",
"pages": "763--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[West, 1953] M. West. A General Service List of English Words. Longman, Harlow, Essex, 1953. [Wilks et al., 1989] Y. Wilks, D. Fass, C. M. Guo, J. McDonald, T. Plate, and B. Slator. A tractable machine dictionary as a resource for computational semantics. In B. Boguraev and E. J. Briscoe, editors, Computational Lexicography for Natural Language Processing. Longman, Harlow, Essex, 1989. [Youmans, 1991] G. Youmans. A new tool for discourse analysis: The vocabulary-management profile. Language, 67:763-789, 1991.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Semantic relation: a. Desmond saw a cat. It was Molly's pet. b. Molly goes to the north. Not east. c. Desmond goes to a theatre. He likes films."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "A sample entry of LDOCE and a corresponding entry of Glossème (in S-expression). (red_1 (adj) 0.000000 ;;"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "A sample node of Paradigme (in S-expression). of the headword. The det-parts restrict the meaning of the head-part. (See Figure 2.)"
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The référant of a node consists of several subréférants corresponding to the units of Glossème. As shown in"
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "and 3, a morphological analysis maps the word brownish in the second unit onto a link to the node brown_1, and the word colour onto two links to colour_1 (adjective) and colour_2 (noun). A référé of a node p records the nodes referring to p. For example, the référé of red_1 is a set of links to nodes (ex. apple_1) that have a link to red_1 in their référants. The référé provides information about the extension of red_1, not the intension shown in the référant. Each link has thickness tk, which is computed from the frequency of the word wk in Glossème and other information, and normalized as Σ tk = 1 in each subréférant or référé. Each subréférant also has thickness (for example, 0.333333 in the first subréférant of red_1), which is computed from the order of the units, representing the significance of the definitions. Appendix A describes the structure of Paradigme in detail. Process of measuring the similarity"
},
"FIGREF7": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Each of its nodes can hold activity, and it moves through the links. Each node computes its activity value vi(T+1) at time T+1 as follows: vi(T+1) = φ(Ri(T), R'i(T), ei(T)), where Ri(T) and R'i(T) are the sums of weighted activity (at time T) of the nodes referred to in the référant and référé, respectively. And ei(T) is activity given from outside (at time T); to 'activate a node' is to let ei(T) > 0. The output function φ sums up the three activity values in appropriate proportion and limits the output value to [0,1]. Appendix B gives the details of the spreading activation."
},
"FIGREF8": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "LDV. The similarity σ(w, w') ∈ [0, 1] is computed in the following way. (See also Figure 4.) 1. Reset activity of all nodes in Paradigme. 2. Activate w with strength s(w) for 10 steps,"
},
"FIGREF9": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 5.) In this case, both of the nodes red_1 (adjective) and red_2 (noun) are activated with strength s(red) = 0.500955. Next, we compute s(orange) = 0.676253, and observe a(P(red), orange) = 0.390774. Then, the similarity between red and orange is obtained as follows: in LDV onto their root forms (i.e. headwords of LDOCE). 4.2 Examples of Similarity between Words The procedure described above can compute the similarity σ(w, w') between any two words w, w' in LDV and their derivations. Computer programs of this procedure -- spreading activation (in C), morphological analysis and others (in Common Lisp) -- can compute σ(w, w') within 2.5 seconds on a workstation (SPARCstation 2). The similarity σ between words works as an indicator of lexical cohesion. The following examples illustrate that σ increases with the strength of semantic relation: σ also increases with the occurrence tendency of words, for example: σ(waiter, restaurant) = 0.175699, σ(computer, restaurant) = 0.003268, σ(red,"
},
"FIGREF10": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "have higher similarity; meaningless words (especially, function words) should have lower similarity. The similarity σ(w, w') increases with the significance s(w) and s(w') that represent the meaningfulness of w and w'"
},
"FIGREF11": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Measuring similarity of extra words as the similarity between word lists. An activated pattern produced from the word list: {red, alcoholic, drink}. where P(W) is the activated pattern produced from W by activating each wi ∈ W with strength s(wi)² / Σk s(wk) for 10 steps. And φ is an output function which limits the value to [0,1]."
},
"FIGREF12": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 8. Episode association on Paradigme"
},
"FIGREF14": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 8.)"
}
}
}
}