{
    "paper_id": "P91-1019",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:03:20.983036Z"
    },
    "title": "SUBJECT-DEPENDENT CO-OCCURRENCE AND WORD SENSE DISAMBIGUATION",
    "authors": [
        {
            "first": "Joe",
            "middle": [
                "A"
            ],
            "last": "Guthrie",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Louise",
            "middle": [],
            "last": "Guthrie",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Yorick",
            "middle": [],
            "last": "Wilks",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Homa",
            "middle": [],
            "last": "Aidinejad",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We describe a method for obtaining subject-dependent word sets relative to some (subject) domain. Using the subject classifications given in the machine-readable version of Longman's Dictionary of Contemporary English, we established subject-dependent co-occurrence links between words of the defining vocabulary to construct these \"neighborhoods\". Here, we describe the application of these neighborhoods to information retrieval, and present a method of word sense disambiguation based on these co-occurrences, an extension of previous work.",
    "pdf_parse": {
        "paper_id": "P91-1019",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We describe a method for obtaining subject-dependent word sets relative to some (subject) domain. Using the subject classifications given in the machine-readable version of Longman's Dictionary of Contemporary English, we established subject-dependent co-occurrence links between words of the defining vocabulary to construct these \"neighborhoods\". Here, we describe the application of these neighborhoods to information retrieval, and present a method of word sense disambiguation based on these co-occurrences, an extension of previous work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Word associations have been studied for some time in the fields of psycholinguistics (by testing human subjects on words), linguistics (where meaning is often based on how words co-occur with each other), and more recently, by researchers in natural language processing (Church and Hanks, 1990; Hindle and Rooth, 1990; Dagan, 1990; Wilks et al., 1990) using statistical measures to identify sets of associated words for use in various natural language processing tasks.",
                "cite_spans": [
                    {
                        "start": 270,
                        "end": 294,
                        "text": "(Church and Hanks, 1990;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 295,
                        "end": 318,
                        "text": "Hindle and Rooth, 1990;",
                        "ref_id": null
                    },
                    {
                        "start": 319,
                        "end": 331,
                        "text": "Dagan, 1990;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 332,
                        "end": 351,
                        "text": "Wilks et al., 1990)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "One of the tasks where the statistical data on associated words has been used with some success is lexical disambiguation. However, associated word sets gathered from a general corpus may contain words that are associated with many different senses. For example, vocabulary associated with the word \"bank\" includes \"money\", \"rob\", \"river\" and \"sand\". In this paper, we describe a method for obtaining subject-dependent associated word sets, or \"neighborhoods\" of a given word, relative to a particular (subject) domain. Using the subject classifications of Longman's Dictionary of Contemporary English (LDOCE), we have established subject-dependent co-occurrence links between words of the defining vocabulary to construct these neighborhoods. We will describe the application of these neighborhoods to information retrieval, and present a method of word sense disambiguation based on these co-occurrences, an extension of previous work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "Words which occur frequently with a given word may be thought of as forming a \"neighborhood\" of that word. If we can determine which words (i.e. spelling forms) co-occur frequently with each word sense, we can use these neighborhoods to disambiguate the word in a given text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CO-OCCURRENCE NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "Assume that we know of only two of the classic senses of the word bank: 1) A repository for money, and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CO-OCCURRENCE NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "A pile of earth on the edge of a river.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2)",
                "sec_num": null
            },
            {
                "text": "We can expect the \"money\" sense of bank to co-occur frequently with such words as \"money\", \"loan\", and \"robber\", while the \"river\" sense would be more frequently associated with \"river\", \"bridge\", and \"earth\". In order to disambiguate \"bank\" in a text, we would produce neighborhoods for each sense, and intersect them with the text, our assumption being that the neighborhood which shared more words with the text would determine the correct sense. Variations of this idea appear in (Lesk, 1986; Wilks, 1987; Veronis and Ide, 1990 ).",
                "cite_spans": [
                    {
                        "start": 484,
                        "end": 496,
                        "text": "(Lesk, 1986;",
                        "ref_id": null
                    },
                    {
                        "start": 497,
                        "end": 509,
                        "text": "Wilks, 1987;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 510,
                        "end": 531,
                        "text": "Veronis and Ide, 1990",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2)",
                "sec_num": null
            },
            {
                "text": "Previously, McDonald and Plate (Schvaneveldt, 1990) used the LDOCE definitions as their text, in order to generate co-occurrence data for the 2,187 words in the LDOCE control (defining) vocabulary. They used various methods to apply this data to the problem of disambiguating control vocabulary words as they appear in the LDOCE example sentences. In every case, however, the neighborhood of a given word was a co-occurrence neighborhood for its spelling form over all the definitions in the dictionary. Distinct neighborhoods corresponding to distinct senses had to be obtained by using the words in the sense definition as a core for the neighborhood, and expanding it by combining it with additional words from the co-occurrence neighborhoods of the core words.",
                "cite_spans": [
                    {
                        "start": 31,
                        "end": 52,
                        "text": "(Schvaneveldt, 1990)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2)",
                "sec_num": null
            },
            {
                "text": "The study of word co-occurrence in a text is based on the cliche that \"one (a word) is known by the company one keeps\". We hold that it also makes a difference where that company is kept: since a word may occur with different sets of words in different contexts, we construct word neighborhoods which depend on the subject of the text in question. We call these, naturally enough, \"subject-dependent neighborhoods\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "A unique feature of the electronic version of LDOCE is that many of the word sense definitions are marked with a subject field code which tells us which subject area the sense pertains to. For example, the \"money\"-related senses of bank are marked EC (Economics), and for each such main subject heading, we consider the subset of LDOCE definitions that consists of those sense definitions which share that subject code. These definitions are then collected into one file, and co-occurrence data for their defining vocabulary is generated. Word x is said to co-occur with word y if x and y appear in the same sense definition; the total number of times they co-occur is denoted as f_xy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "We then construct a 2,187 x 2,187 matrix in which each row and column corresponds to one word of the defining vocabulary, and the entry in the xth row and yth column represents the number of times the xth word co-occurred with the yth word. (This is a symmetric matrix, and therefore it is only necessary to maintain half of it.) We denote by f_x the total number of times word x appeared. While many statistics may be used to measure the relatedness of words x and y, we used the function",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "r(x, y) = f_xy / (f_x f_y)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "in this study. We choose a co-occurrence neighborhood of a word x from a set of closely related words. We may choose the ten words with the highest relatedness statistic, for instance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "Neighborhoods of the word \"metal\" in the category \"Economics\" and \"Business\" are presented below: In this example, the neighborhoods reflect a fundamental difference between the two subject areas. Economics is a more theoretical subject, and therefore its neighborhood contains words like \"idea\", \"gold\", \"silver\", and \"real\", while in the more practical domain of Business, we find the words \"brass\", \"apparatus\", \"spring\", and \"plate\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "We can expect the contrast between subject neighborhoods to be especially great for words with senses that fall into different subject areas. Consider the actual neighborhoods of our original example, bank. Notice that even though we included the twenty most closely related words in each neighborhood, they are still unrelated or disjoint, although many of the words which appear in the lists are indeed suggestive of the sense or senses which fall under that subject category. In LDOCE, three of the eleven senses of bank are marked with the code EC for Economics, and these represent the \"money\" senses of the word. It is a quirk of the classification in LDOCE that the \"river\" senses of bank are not marked with a subject code.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "This lack of a subject code for a word sense in LDOCE is not uncommon, however, and as was the case with bank, some word senses may have subject codes, while others do not. We label this lack of a subject code the \"null code\", and form a neighborhood for this type of sense by using all sense definitions without a subject code as the text. This \"null code neighborhood\" can reveal the common, or \"generic\" sense of the word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "The twenty most frequently occurring words with bank in definitions with the null subject code form the following neighborhood: It is obvious that approximately half of these words are associated with our two main senses of bank, but a new element has crept in: the appearance of four out of eight words which refer to the money sense (\"rob\", \"criminal\", \"police\", and \"thief\") reveal a sense of bank which did not appear in the EC neighborhood.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "In the null code definitions, there are quite a few references to the potential for a bank to be robbed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "Finally, for comparison, consider a neighborhood for bank which uses all the LDOCE definitions (see Schvaneveldt, 1990; Wilks et al., 1990 ): Only four of these words (\"bank\", \"cam\", \"sand\", and \"thief\") are not found in the other three neighborhoods, and the numbers of words in the intersections of this neighborhood with the Economics, Engineering, and Null neighborhoods are six, four, and eleven, respectively. Recalling that the Economics and Engineering neighborhoods are disjoint, this data supports our hypothesis that the subject-dependent neighborhoods help us to distinguish senses more easily than neighborhoods which are extracted from the whole dictionary.",
                "cite_spans": [
                    {
                        "start": 100,
                        "end": 119,
                        "text": "Schvaneveldt, 1990;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 120,
                        "end": 138,
                        "text": "Wilks et al., 1990",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "There are over a hundred main subject field codes in LDOCE, and over three hundred sub-divisions within these. For example, \"medicine-and-biology\" is a main subject field (coded \"MD\"), and has twenty-two sub-divisions such as \"anatomy\" and \"biochemistry\". These main codes and their sub-divisions constitute the only two levels in the LDOCE subject code hierarchy, and main codes such as \"golf\" and \"sports\" are not related to each other. Currently, we use only the main codes when we are constructing a subject-dependent neighborhood. But even this division of the definition text is fine enough so that, given a word and a subject code, the word may not appear in the definitions which have that subject code at all.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "To overcome this problem, we have adopted a restructured hierarchy of the subject codes, as developed by Slator (1988). This tree structure has a node at the top, representing all the definitions. At the next level are six fundamental categories such as \"science\" and \"transportation\", as well as the null code. These clusters are further subdivided so that some main codes become sub-divisions of others (\"golf\" becomes a sub-division of \"sports\", etc.). The maximum depth of this tree is five levels.",
                "cite_spans": [
                    {
                        "start": 105,
                        "end": 118,
                        "text": "Slator (1988)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "If the word for which we want to produce a neighborhood appears too infrequently in definitions with a given code, we travel up the hierarchy and expand the text under consideration until we have reached a point where the word appears frequently enough to allow the neighborhood to be constructed. The worst case scenario would be one in which we had traveled all the way to the top of the hierarchy and used all the definitions as the text, only to wind up with the same co-occurrence neighborhoods as did McDonald and Plate (Schvaneveldt, 1990; Wilks et al., 1990 )! There are certain drawbacks in using LDOCE to construct the subject-dependent neighborhoods, however: the amount of text in LDOCE about any one subject area is rather limited, the text is comprised of a control vocabulary for dictionary definitions only, and the sample sentences were concocted with non-native English speakers in mind.",
                "cite_spans": [
                    {
                        "start": 507,
                        "end": 546,
                        "text": "McDonald and Plate (Schvaneveldt, 1990;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 547,
                        "end": 565,
                        "text": "Wilks et al., 1990",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "In the next phase of our research, large corpora consisting of actual documents from a given subject area will be used, in order to obtain neighborhoods which more accurately reflect the sorts of texts which will be used in applications. In the future, these neighborhoods may replace those constructed from LDOCE, while leaving the subject code hierarchy and various applications intact.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUBJECT-DEPENDENT NEIGHBORHOODS",
                "sec_num": null
            },
            {
                "text": "In this section, we describe an application of subject-dependent co-occurrence neighborhoods to the problem of word sense disambiguation. The subject-dependent co-occurrence neighborhoods are used as building blocks for the neighborhoods used in disambiguation. For each of the subject codes (including the null code) which appear with a word sense to be disambiguated, we intersect the corresponding subject-dependent co-occurrence neighborhood with the text being considered (the size of text can vary from a sentence to a paragraph). The intersection must contain a pre-selected minimum number of words to be considered. But if none of the neighborhoods intersect at greater than this threshold level, we replace the neighborhood N by the neighborhood N(1), which consists of N together with the first word from each neighborhood of words in N, using the same subject code. If necessary, we add the second most strongly associated word for each of the words in the original neighborhood N, forming the neighborhood N(2). We continue this process until a subject-dependent co-occurrence neighborhood has intersection above the threshold level. Then, the sense or senses with this subject code are selected. If more than one sense has the selected code, we use their definitions as cores to build distinguishing neighborhoods for them. These are again intersected with the text to determine the correct sense.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SENSE DISAMBIGUATION",
                "sec_num": null
            },
            {
                "text": "The following two examples illustrate this method. Note that some of the neighborhoods differ from those given earlier since the text used to construct these neighborhoods includes any example sentences which may occur in the sense definitions. Those neighborhoods presented earlier ignored the example sentences. In each example, we attempt to disambiguate the word \"bank\" in a sentence which appears as an example sentence in the Collins COBUILD English Language Dictionary. The disambiguation consists of choosing the correct sense of \"bank\" from among the thirteen senses given in LDOCE. These senses are summarized below. Example 1. The sentence is \"The aircraft turned, banking slightly.\" The neighborhoods of \"bank\" for the five relevant subject codes are given below. The AU neighborhood contains two words, \"aircraft\" and \"turn\", which also appear in the sentence. Note that we consider all forms of turn (turned, turning, etc.) to match \"turn\". Since none of the other neighborhoods have any words in common with the sentence, and since our threshold value for this short sentence is 2, AU is selected as the subject code. We must now decide between the two senses which have this code.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SENSE DISAMBIGUATION",
                "sec_num": null
            },
            {
                "text": "At this point we remove the function words from the sense definitions and replace each remaining word by its root form. We obtain the following neighborhoods. Since bank(4) has no words in common with the sentence, and bank(6) has two (\"turn\" and \"aircraft\"), bank(6) is selected. This is indeed the sense of \"bank\" used in the sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SENSE DISAMBIGUATION",
                "sec_num": null
            },
            {
                "text": "Example 2. The sentence is \"We got a bank loan to buy a car.\" The original neighborhoods of \"bank\" are, of course, the same as in Example 1. The threshold is again 2. None of the neighborhoods has more than one word in common with the sentence, so the iterative process of enlarging the neighborhoods is used. The AU neighborhood is expanded to include \"engine\" since it is the first word in the AU neighborhood of \"make\". The first word in the AU neighborhood of \"up\" is \"increase\", so \"increase\" is added to the neighborhood. If the word to be added already appears in the neighborhood of \"bank\", no word is added.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SENSE DISAMBIGUATION",
                "sec_num": null
            },
            {
                "text": "On the fifteenth iteration, the EC neighborhood contains \"get\" and \"buy\". None of the other neighborhoods have more than one word in common with the sentence, so EC is selected as the subject code. Definitions 8, 12, and 13 of bank all have the EC subject code, so their definitions are used as cores to build neighborhoods to allow us to choose one of them. After twenty-three iterations, bank(8) is selected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SENSE DISAMBIGUATION",
                "sec_num": null
            },
            {
                "text": "Experiments are underway to test this method and variations of it on large numbers of sentences so that its effectiveness may be compared with other disambiguation techniques. Results of these experiments will be reported elsewhere.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SENSE DISAMBIGUATION",
                "sec_num": null
            },
            {
                "text": "applications of subject-dependent neighborhoods in addition to word-sense disambiguation are being pursued, as well. For information retrieval, previously constructed neighborhoods relevant to the subject area can be used to expand a query and the target (titles, key words, etc.) to include more words in the intersection, and improve both recall and precision. Another application is the determination of the subject area of a text. Since the effectiveness of searching for key words to determine the topic of a text is limited by the choice of the particular list of key words, and the fact that the text may use synonyms or refer to the concept the key word represents without using it (for example by using a pronoun in its place), we could look for word associations (thereby involving more words in the process and making it less vulnerable to the above problems), rather than simply searching for key words indicative of a topic. Neighborhoods of words in the text could be constructed for each of the six fundamental categories, and intersected with the surrounding words in the text. After choosing the category with the greatest intersection, we would then traverse the subject code tree downward to arrive at a more specific code, stopping at any point where there is not enough data to allow us to choose one code over the others at that level. Once a subject code is selected for a text, it could be used as a context for word-sense disambiguation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Several",
                "sec_num": null
            },
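The topic-determination procedure described above (intersect subject-dependent neighborhoods with the words of the text, choose the category with the greatest intersection, then descend the subject code tree until no code can be preferred over its siblings) can be sketched in code. This is a minimal illustration under assumed data structures; the paper does not specify a representation, so `neighborhoods`, `tree`, and the `min_margin` stopping test are hypothetical:

```python
# Sketch of subject-code selection by neighborhood intersection.
# neighborhoods: code -> set of neighborhood words (hypothetical structure)
# tree: code -> list of child codes (hypothetical subject-code tree)

def best_code(codes, neighborhoods, text_words, min_margin=1):
    """Return the code whose neighborhood overlaps the text most,
    or None when no code clearly beats the others (not enough data)."""
    scores = {c: len(neighborhoods[c] & text_words) for c in codes}
    ranked = sorted(scores, key=scores.get, reverse=True)
    if len(ranked) > 1 and scores[ranked[0]] - scores[ranked[1]] < min_margin:
        return None
    return ranked[0]

def classify(tree, neighborhoods, text_words, root_codes):
    """Start from the fundamental categories and descend the subject-code
    tree, stopping where no child can be chosen over the others."""
    code = best_code(root_codes, neighborhoods, text_words)
    while code is not None and tree.get(code):
        child = best_code(tree[code], neighborhoods, text_words)
        if child is None:
            break  # not enough data to go more specific
        code = child
    return code
```

With toy data, a text containing `{"money", "account", "cheque", "pay"}` would first select an Economics root code over Engineering, then descend to a more specific child whose neighborhood it overlaps most.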
            {
                "text": "Although the words in the LDOCE definitions constitute a small text (almost one million words, compared with the mega-texts used in other co-occurrence studies), the unique feature of subject codes which can be used to distinguish many definitions, and LDOCE's small control vocabulary (2,187 words) make it a useful corpus for obtaining co-occurrence data. The development of techniques for information retrieval and word-sense disambiguation based on these subject-dependent cooccurrence neighborhoods is very promising indeed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CONCLUSION",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This research was supported by the New Mexico State University Computing Research Laboratory through NSF Grant No. IRI-8811108. Grateful acknowledgement is accorded to all the members of the CRL Natural Language Group for their comments and suggestions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ACKNOWLEDGEMENTS",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Word Association Norms, Mutual Information, and Lexicography",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [
                            "W"
                        ],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Hanks",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Computational Linguistics",
                "volume": "16",
                "issue": "1",
                "pages": "22--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Church, Kenneth W., and Patrick Hanks (1990). Word Association Norms, Mutual Infor- mation, and Lexicography. Computational Linguistics, 16, 1, pp.22-29.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Processing Large Corpora for Reference Resolution",
                "authors": [
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    },
                    {
                        "first": "Alon",
                        "middle": [],
                        "last": "Itai",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING-90)",
                "volume": "3",
                "issue": "",
                "pages": "330--332",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dagan, Ido, and Alon Itai (1990). Process- ing Large Corpora for Reference Resolution. Proceedings of the 13th International Conference on Computational Linguistics (COLING-90), Helsinki, Finland, 3, pp.330-332.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Structural Ambiguity and Lexical Relations. Proceedings of the DARPA Speech and Natural Language Workshop",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Structural Ambiguity and Lexical Relations. Proceedings of the DARPA Speech and Natural Language Workshop.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [
                            "E"
                        ],
                        "last": "Lesk",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [
                            "E"
                        ],
                        "last": "Mcdonald",
                        "suffix": ""
                    },
                    {
                        "first": "Tony",
                        "middle": [],
                        "last": "Plate",
                        "suffix": ""
                    },
                    {
                        "first": "Roger",
                        "middle": [
                            "W"
                        ],
                        "last": "Schvaneveldt",
                        "suffix": ""
                    }
                ],
                "year": 1986,
                "venue": "Pathfinder Associative Networks: Studies in Knowledge Organization. New Jersey: Ablex. Slator, Brian M. 0988)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lesk, Michael E. (1986). Automatic Sense Disambiguation Using Machine Readable Dic- tionaries: How to Tell a Pine Cone from an Ice Cream Cone. Proceedings of the ACM SIGDOC Conference, Toronto, Ontario. McDonald, James E., Tony Plate, and Roger W. Schvaneveldt (1990). Using Pathfinder to extract semantic information from text. In R. W. Schvaneveldt (ed.), Pathfinder Associative Networks: Studies in Knowledge Organization. Norwood, NJ: Ablex. Schvaneveldt, Roger W. (1990). Path- finder Associative Networks: Studies in Knowledge Organization. New Jersey: Ablex. Slator, Brian M. 0988). Constructing",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Contextually Organized Lexical Semantic Knowledge-bases",
                "authors": [],
                "year": 1990,
                "venue": "Proceedings of the Third Annual Rocky Mountain Conference on Artificial Intelligence (RMCAI-88)",
                "volume": "",
                "issue": "",
                "pages": "142--148",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Contextually Organized Lexical Semantic Knowledge-bases. Proceedings of the Third Annual Rocky Mountain Conference on Artificial Intelligence (RMCAI-88), Denver, CO, pp.142- 148. Veronis, Jean., Nancy Ide (1990). Very",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Large Neural Networks for Word-sense Disambiguation",
                "authors": [],
                "year": null,
                "venue": "COLING '90",
                "volume": "",
                "issue": "",
                "pages": "389--394",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Large Neural Networks for Word-sense Disambi- guation. COLING '90, 389-394.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A Tractable Machine Dictionary as a Resource for Computational Semantics",
                "authors": [
                    {
                        "first": "Yorick",
                        "middle": [
                            "A"
                        ],
                        "last": "Wilks",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Dan",
                        "suffix": ""
                    },
                    {
                        "first": "Chengming",
                        "middle": [],
                        "last": "Fass",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [
                            "E"
                        ],
                        "last": "Guo",
                        "suffix": ""
                    },
                    {
                        "first": "Tony",
                        "middle": [],
                        "last": "Mcdonald",
                        "suffix": ""
                    },
                    {
                        "first": "Brian",
                        "middle": [
                            "M"
                        ],
                        "last": "Plate",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Slator",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "Computational IJ.xicography for Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wilks, Yorick A., Dan C. Fass, Cheng- ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator (1987). A Tractable Machine Dictionary as a Resource for Computational Semantics. Memorandum in Computer and Cog- nitive Science, MCCS-87-105, Computing Research Laboratory, New Mexico State Univer- sity. In Branimir Boguraev and Ted Briscoe (eds.), Computational IJ.xicography for Natural Language Processing. Harlow, Essex, England: Longman Group Limited.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Journal of Machine Translation, 2. Also to appear in Theoretical and Computational Issues in Lexical Semantics",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Wilk",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Ymick",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Dan",
                        "suffix": ""
                    },
                    {
                        "first": "Chengming",
                        "middle": [],
                        "last": "Fass",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [
                            "E"
                        ],
                        "last": "Guo",
                        "suffix": ""
                    },
                    {
                        "first": "Tony",
                        "middle": [],
                        "last": "Mcdonald",
                        "suffix": ""
                    },
                    {
                        "first": "Brian",
                        "middle": [
                            "M"
                        ],
                        "last": "Plate",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Slator",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "J",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wilk.% Ymick A., Dan C. Fass, Cheng- ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator (1990). Prodding Machine Tractable Dictionary Tools. Journal of Machine Translation, 2. Also to appear in Theoretical and Computational Issues in Lexical Semantics , J. Pnstejovsky (~!.)",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "bank(l) : [ ] : land along the side of a fiver, lake, etc. bank(2) : [ ] : earth which is heaped up in a field or garden. bank(3) : [ ] : a mass of snow, clouds, mud, etc. bank(4) : [AU] : a slope made at bends in a road or race-track. bank(5) : [ ] : a sandbank in a river, etc. bank(6) : [ALl] : to move a ear or aircraft with one side higher than the other. bank('/) : [ ] : a row, especially of oars in an ancient boat or keys on a typewriter. bank(8) : [EC] : a place in which money is kept and paid out on demand. bank(9) : [MD] : a place where something is held ready for use, such as blood. bank(10) : [GB] : (a person who keeps) a supply of money or pieces for payment in a gambling game. bank(ll) : [ ] : break the bank is to win all the money in bank(10). bank(12) : [EC] : to put or keep (money) in a bank. bank(13) : [EC] : to keep ones money in a bank.",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "TABREF0": {
                "text": "Economics neighborhood of metal",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"3\">Subject Code EC ffi Economics</td></tr><tr><td>metal</td><td colspan=\"2\">idea coin</td><td>them</td><td>silver</td></tr><tr><td/><td>w, al</td><td colspan=\"3\">should pocket gold</td></tr><tr><td/><td>well</td><td>him</td><td/></tr></table>"
            },
            "TABREF1": {
                "text": "Business neighborhood of recta/",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"3\">Subject Code BU = Business</td><td/></tr><tr><td>metal</td><td>bear</td><td>apparatus</td><td>mouth</td><td>inside</td></tr><tr><td/><td>spring</td><td>entrance</td><td>plate</td><td>brags</td></tr><tr><td/><td>tight</td><td>sheet</td><td/><td/></tr></table>"
            },
            "TABREF2": {
                "text": "Economics neighborhood of bank",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"3\">Subject Code EC = Economies</td><td/></tr><tr><td>bank</td><td colspan=\"4\">account cheque money by</td></tr><tr><td/><td>into</td><td>have</td><td>keep</td><td>order</td></tr><tr><td/><td>out</td><td>pay</td><td>at</td><td>put</td></tr><tr><td/><td>from</td><td>draw</td><td>an</td><td>busy</td></tr><tr><td/><td>more</td><td>supply</td><td>it</td><td>safe</td></tr><tr><td colspan=\"5\">Table 4. Engineering neighborhood of bank</td></tr><tr><td/><td colspan=\"3\">Subject Code EG = Engineering</td><td/></tr><tr><td>bank</td><td>river</td><td>wall</td><td>flood</td><td>thick</td></tr><tr><td/><td>earth</td><td colspan=\"3\">prevent opposite chair</td></tr><tr><td/><td colspan=\"2\">hurry paste</td><td>spread</td><td>overflow</td></tr><tr><td/><td>walk</td><td>help</td><td>we</td><td>throw</td></tr><tr><td/><td>clay</td><td>then</td><td>wide</td><td>level</td></tr></table>"
            },
            "TABREF3": {
                "text": "Null Code neighborhood of bank",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td colspan=\"4\">Subject Code NULL = no code assigned</td></tr><tr><td>bank rob</td><td>river</td><td colspan=\"2\">account lend</td></tr><tr><td colspan=\"2\">overflow flood</td><td>money</td><td>criminal</td></tr><tr><td>lake</td><td>flow</td><td>snow</td><td>cliff</td></tr><tr><td>police</td><td>shore</td><td>heap</td><td>thief</td></tr><tr><td>borrow</td><td colspan=\"2\">along steep</td><td>earth</td></tr></table>"
            },
            "TABREF4": {
                "text": "Unrestricted neighborhood of bank",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td colspan=\"3\">Subject Code All</td><td/></tr><tr><td>bank account</td><td colspan=\"2\">bank busy</td><td>cheque</td></tr><tr><td>criminal</td><td>earn</td><td colspan=\"2\">flood flow</td></tr><tr><td>interest</td><td>lake</td><td>lend</td><td>money</td></tr><tr><td colspan=\"2\">overflow pay</td><td>river</td><td>rob</td></tr><tr><td>safes</td><td>and</td><td>thief</td><td>wall</td></tr></table>"
            },
            "TABREF5": {
                "text": "Automotive neighborhood of bank",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"4\">Subject Code ALl = Automotive</td></tr><tr><td colspan=\"2\">bank make</td><td>go</td><td>up</td><td>move</td></tr><tr><td/><td>so</td><td>they</td><td>high</td><td>also</td></tr><tr><td/><td>round</td><td>car</td><td>side</td><td>turn</td></tr><tr><td/><td>road</td><td colspan=\"3\">aircraft slope bend</td></tr><tr><td/><td>safe</td><td/><td/><td/></tr><tr><td colspan=\"5\">Table 8. Economics neighborhood of bank</td></tr><tr><td/><td colspan=\"3\">Subject Code EC = Economics</td><td/></tr><tr><td>bank</td><td>have</td><td>it</td><td>person</td><td>out</td></tr><tr><td/><td>into</td><td>take</td><td>money</td><td>put</td></tr><tr><td/><td>write</td><td>keep</td><td>pay</td><td>order</td></tr><tr><td/><td>another</td><td colspan=\"2\">paper draw</td><td>supply</td></tr><tr><td/><td colspan=\"2\">account safe</td><td>sum</td><td>cheque</td></tr><tr><td colspan=\"5\">Table 9. Gambling neighborhood of bank</td></tr><tr><td/><td colspan=\"3\">Subject Code GB = Gambling</td><td/></tr><tr><td>bank</td><td>person</td><td>use</td><td>money</td><td>piece</td></tr><tr><td/><td>play</td><td>keep</td><td>pay</td><td>game</td></tr><tr><td/><td colspan=\"3\">various supply chance</td><td/></tr><tr><td colspan=\"5\">Table 10. Medical neighborhood of bank</td></tr><tr><td colspan=\"5\">Subject Code MD -Medicine and Biology</td></tr><tr><td>bank</td><td colspan=\"2\">something use</td><td>place</td><td>hold</td></tr><tr><td/><td>medicine</td><td>ready</td><td>blood</td><td>human</td></tr><tr><td/><td>origin</td><td>organ</td><td>store</td><td>hospital</td></tr><tr><td/><td>tream~ent</td><td colspan=\"2\">product comb</td><td/></tr></table>"
            },
            "TABREF6": {
                "text": "Null Code neighborhood of bank",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"4\">Subject Code NULL = No code assigned</td></tr><tr><td>bank</td><td>game</td><td>earth</td><td>stone</td><td>boat</td></tr><tr><td/><td>fiver</td><td>bar</td><td>snow</td><td>lake</td></tr><tr><td/><td>sand</td><td>shore</td><td>mud</td><td>framework</td></tr><tr><td/><td>flood</td><td>cliff</td><td>heap</td><td>harbor</td></tr><tr><td/><td colspan=\"2\">ocean parallel</td><td colspan=\"2\">overflow clerk</td></tr></table>"
            },
            "TABREF7": {
                "text": "Words in sense 4 of bank",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"3\">Definition bank(4)</td><td/></tr><tr><td colspan=\"5\">slope make bend road so</td></tr><tr><td>they</td><td>safe</td><td>car</td><td>go</td><td>round</td></tr><tr><td colspan=\"5\">Table 13. Words in sense 6 of bank</td></tr><tr><td/><td colspan=\"3\">Definition bank(6)</td><td/></tr><tr><td>car</td><td/><td>aircraft</td><td colspan=\"2\">move side</td></tr><tr><td colspan=\"2\">high</td><td>make</td><td>turn</td><td/></tr></table>"
            }
        }
    }
}