{
    "paper_id": "P92-1014",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:12:11.382953Z"
    },
    "title": "INFORMATION RETRIEVAL USING ROBUST NATURAL LANGUAGE PROCESSING",
    "authors": [
        {
            "first": "Tomek",
            "middle": [],
            "last": "Strzalkowski",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Courant Institute of Mathematical Sciences New York University",
                "location": {
                    "addrLine": "715 Broadway, rm. 704",
                    "postCode": "10003",
                    "settlement": "New York",
                    "region": "NY"
                }
            },
            "email": ""
        },
        {
            "first": "Barbara",
            "middle": [],
            "last": "Vauthey",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Courant Institute of Mathematical Sciences New York University",
                "location": {
                    "addrLine": "715 Broadway, rm. 704",
                    "postCode": "10003",
                    "settlement": "New York",
                    "region": "NY"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We developed a prototype information retrieval system which uses advanced natural language processing techniques to enhance the effectiveness of traditional keyword based document retrieval. The backbone of our system is a statistical retrieval engine which performs automated indexing of documents, then search and ranking in response to user queries. This core architecture is augmented with advanced natural language processing tools which are both robust and efficient. In early experiments, the augmented system has displayed capabilities that appear to make it superior to the purely statistical base.",
    "pdf_parse": {
        "paper_id": "P92-1014",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We developed a prototype information retrieval system which uses advanced natural language processing techniques to enhance the effectiveness of traditional keyword based document retrieval. The backbone of our system is a statistical retrieval engine which performs automated indexing of documents, then search and ranking in response to user queries. This core architecture is augmented with advanced natural language processing tools which are both robust and efficient. In early experiments, the augmented system has displayed capabilities that appear to make it superior to the purely statistical base.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "A typical information retrieval (IR) task is to select documents from a database in response to a user's query, and rank these documents according to relevance. This has usually been accomplished using statistical methods (often coupled with manual encoding), but it is now widely believed that these traditional methods have reached their limits. 1 These limits are particularly acute for text databases, where natural language processing (NLP) has long been considered necessary for further progress. Unfortunately, the difficulties encountered in applying computational linguistics technologies to text processing have contributed to a widespread belief that automated NLP may not be suitable in IR. These difficulties included inefficiency, limited coverage, and the prohibitive cost of the manual effort required to build lexicons and knowledge bases for each new text domain. On the other hand, while numerous experiments did not establish the usefulness of NLP, they cannot be considered conclusive because of their very limited scale.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "Another reason is the limited scale at which NLP was used. Syntactic parsing of the database contents, for example, has been attempted in order to extract linguistically motivated \"syntactic phrases\", which presumably were better indicators of contents than \"statistical phrases\", where words were grouped solely on the basis of physical proximity (e.g., \"college junior\" is not the same as \"junior college\"). These intuitions, however, were not confirmed by experiments; worse still, statistical phrases regularly outperformed syntactic phrases (Fagan, 1987). Attempts to overcome the poor statistical behavior of syntactic phrases have led to various clustering techniques that grouped synonymous or near-synonymous phrases into \"clusters\" and replaced these by single \"metaterms\". Clustering techniques were somewhat successful in upgrading overall system performance, but their effectiveness was diminished by the frequently poor quality of syntactic analysis. Since full-analysis wide-coverage syntactic parsers were either unavailable or inefficient, various partial parsing methods have been used. Partial parsing was usually fast enough, but it also generated noisy data: as many as 50% of all generated phrases could be incorrect (Lewis and Croft, 1990). Other efforts concentrated on processing of user queries (e.g., Sparck Jones and Tait, 1984; Smeaton and van Rijsbergen, 1988). Since queries were usually short and few, even relatively inefficient NLP techniques could be of benefit to the system. None of these attempts proved conclusive, and some were never properly evaluated either. [Footnote: Current address: Laboratoire d'Informatique, Universite de Fribourg, ch. du Musee 3, 1700 Fribourg, Switzerland; vauthey@cfmniSl.bitnet.]",
                "cite_spans": [
                    {
                        "start": 543,
                        "end": 556,
                        "text": "(Fagan, 1987)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1233,
                        "end": 1256,
                        "text": "(Lewis and Croft, 1990)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 1327,
                        "end": 1348,
                        "text": "Jones and Tait, 1984;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1349,
                        "end": 1382,
                        "text": "Smeaton and van Rijsbergen, 1988)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "1 As far as automatic document retrieval is concerned. Techniques involving various forms of relevance feedback are usually far more effective, but they require the user's manual intervention in the retrieval process. In this paper, we are concerned with fully automated retrieval only.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "2 Standard IR benchmark collections are statistically too small and the experiments can easily produce counterintuitive results. For example, the Cranfield collection is only approx. 180,000 English words, while the CACM-3204 collection used in the present experiments is approx. 200,000 words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "We believe that linguistic processing of both the database and the user's queries needs to be done for maximum benefit, and moreover, the two processes must be appropriately coordinated. This prognosis is supported by the experiments performed by the NYU group (Grishman and Strzalkowski, 1991), and by the group at the University of Massachusetts (Croft et al., 1991). We explore this possibility further in this paper.",
                "cite_spans": [
                    {
                        "start": 262,
                        "end": 294,
                        "text": "Grishman and Strzalkowski, 1991)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 350,
                        "end": 370,
                        "text": "(Croft et al., 1991)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": null
            },
            {
                "text": "Our information retrieval system consists of a traditional statistical backbone (Harman and Candela, 1989) augmented with various natural language processing components that assist the system in database processing (stemming, indexing, word and phrase clustering, selectional restrictions), and translate a user's information request into an effective query. This design is a careful compromise between purely statistical non-linguistic approaches and those requiring rather accomplished (and expensive) semantic analysis of data, often referred to as 'conceptual retrieval'. The conceptual retrieval systems, though quite effective, are not yet mature enough to be considered in serious information retrieval applications, the major problems being their extreme inefficiency and the need for manual encoding of domain knowledge (Mauldin, 1991) .",
                "cite_spans": [
                    {
                        "start": 80,
                        "end": 106,
                        "text": "(Harman and Candela, 1989)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 829,
                        "end": 844,
                        "text": "(Mauldin, 1991)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "OVERALL DESIGN",
                "sec_num": null
            },
            {
                "text": "In our system the database text is first processed with a fast syntactic parser. Subsequently certain types of phrases are extracted from the parse trees and used as compound indexing terms in addition to single-word terms. The extracted phrases are statistically analyzed as syntactic contexts in order to discover a variety of similarity links between smaller subphrases and words occurring in them. A further filtering process maps these similarity links onto semantic relations (generalization, specialization, synonymy, etc.), after which they are used to transform the user's request into a search query.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "OVERALL DESIGN",
                "sec_num": null
            },
            {
                "text": "The user's natural language request is also parsed, and all indexing terms occurring in it are identified. Next, certain highly ambiguous (usually single-word) terms are dropped, provided that they also occur as elements in some compound terms. For example, \"natural\" is deleted from a query already containing \"natural language\" because \"natural\" occurs in many unrelated contexts: \"natural number\", \"natural logarithm\", \"natural approach\", etc. At the same time, other terms may be added, namely those which are linked to some query term through admissible similarity relations. For example, \"fortran\" is added to a query containing the compound term \"program language\" via a specification link. After the final query is constructed, the database search follows, and a ranked list of documents is returned.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "OVERALL DESIGN",
                "sec_num": null
            },
            {
                "text": "It should be noted that all the processing steps, both those performed by the backbone system and those performed by the natural language processing components, are fully automated, and no human intervention or manual encoding is required.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "OVERALL DESIGN",
                "sec_num": null
            },
            {
                "text": "TTP (Tagged Text Parser) is based on the Linguistic String Grammar developed by Sager (1981). Written in Quintus Prolog, the parser currently encompasses more than 400 grammar productions. It produces regularized parse tree representations for each sentence that reflect the sentence's logical structure. The parser is equipped with a powerful skip-and-fit recovery mechanism that allows it to operate effectively in the face of ill-formed input or under severe time pressure. In recent experiments with approximately 6 million words of English text, 3 the parser's speed averaged between 0.45 and 0.5 seconds per sentence, or up to 2600 words per minute, on a 21 MIPS SparcStation ELC. Some details of the parser are discussed below. 4 TTP is a full grammar parser, and initially, it attempts to generate a complete analysis for each sentence. However, unlike an ordinary parser, it has a built-in timer which regulates the amount of time allowed for parsing any one sentence. If a parse is not returned before the allotted time elapses, the parser enters the skip-and-fit mode in which it will try to \"fit\" the parse. While in the skip-and-fit mode, the parser will attempt to forcibly reduce incomplete constituents, possibly skipping portions of input in order to restart processing at a next unattempted constituent. In other words, the parser will favor reduction over backtracking while in the skip-and-fit mode. The result of this strategy is an approximate parse, partially fitted using top-down predictions. The fragments skipped in the first pass are not thrown out; instead, they are analyzed by a simple phrasal parser that looks for noun phrases and relative clauses and then attaches the recovered material to the main parse structure. As an illustration, consider the following sentence taken from the CACM-3204 corpus:",
                "cite_spans": [
                    {
                        "start": 80,
                        "end": 92,
                        "text": "Sager (1981)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "FAST PARSING WITH TTP PARSER",
                "sec_num": null
            },
            {
                "text": "The method is illustrated by the automatic construction of both recursive and iterative programs operating on natural numbers, lists, and trees. In order to construct a program satisfying certain specifications a theorem induced by those specifications is proved, and the desired program is extracted from the proof.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "FAST PARSING WITH TTP PARSER",
                "sec_num": null
            },
            {
                "text": "The italicized fragment is likely to cause additional complications in parsing this lengthy string, and the parser may be better off ignoring this fragment altogether. To do so successfully, the parser must close the currently open constituent (i.e., reduce a program satisfying certain specifications to NP), and possibly a few of its parent constituents, removing corresponding productions from further consideration, until an appropriate production is reactivated. In this case, TTP may force the following reductions: SI --> to V NP; SA --> SI; S --> NP V NP SA, until the production S --> S and S is reached. Next, the parser skips input to find and, and resumes normal processing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "FAST PARSING WITH TTP PARSER",
                "sec_num": null
            },
            {
                "text": "As may be expected, the skip-and-fit strategy will only be effective if the input skipping can be performed with a degree of determinism. This means that most of the lexical-level ambiguity must be removed from the input text prior to parsing. We achieve this by using a stochastic parts-of-speech tagger 5 to preprocess the text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "FAST PARSING WITH TTP PARSER",
                "sec_num": null
            },
            {
                "text": "Word stemming has been an effective way of improving document recall since it reduces words to their common morphological root, thus allowing more successful matches. On the other hand, stemming tends to decrease retrieval precision if care is not taken to prevent situations where otherwise unrelated words are reduced to the same stem. In our system we replaced a traditional morphological stemmer with a conservative dictionary-assisted suffix trimmer. 6 The suffix trimmer performs essentially two tasks: (1) it reduces inflected word forms to their root forms as specified in the dictionary, and (2) it converts nominalized verb forms (e.g., \"implementation\", \"storage\") to the root forms of the corresponding verbs (i.e., \"implement\", \"store\"). This is accomplished by removing a standard suffix, e.g., \"stor+age\", replacing it with a standard root ending (\"+e\"), and checking the newly created word against the dictionary, i.e., we check whether the new root (\"store\") is indeed a legal word, and whether the original root (\"storage\") [5: Courtesy of Bolt Beranek and Newman. 6: We use the Oxford Advanced Learner's Dictionary (OALD).]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SUFFIX TRIMMER",
                "sec_num": null
            },
            {
                "text": "is defined using the new root (\"store\") or one of its standard inflexional forms (e.g., \"storing\"). For example, the following definitions are excerpted from the Oxford Advanced Learner's Dictionary (OALD): storage n [U] (space used for, money paid for) the storing of goods ...; diversion n [U] diverting ...; procession n [C] number of persons, vehicles, etc. moving forward and following each other in an orderly way. Therefore, we can reduce \"diversion\" to \"divert\" by removing the suffix \"+sion\" and adding the root form suffix \"+t\". On the other hand, \"process+ion\" is not reduced to \"process\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SUFFIX TRIMMER",
                "sec_num": null
            },
            {
                "text": "Experiments with the CACM-3204 collection show an improvement in retrieval precision of 6% to 8% over the base system equipped with a standard morphological stemmer (in our case, the SMART stemmer).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WORD SUFFIX TRIMMER",
                "sec_num": null
            },
            {
                "text": "Syntactic phrases extracted from TTP parse trees are head-modifier pairs: from simple word pairs to complex nested structures. The head in such a pair is a central element of a phrase (verb, main noun, etc.) while the modifier is one of the adjunct arguments of the head. 7 For example, the phrase fast algorithm for parsing context-free languages yields the following pairs: algorithm+fast, algorithm+parse, parse+language, language+context.free. The following types of pairs were considered: (1) a head noun and its left adjective or noun adjunct, (2) a head noun and the head of its right adjunct, (3) the main verb of a clause and the head of its object phrase, and (4) the head of the subject phrase and the main verb. These types of pairs account for most of the syntactic variants for relating two words (or simple phrases) into pairs carrying compatible semantic content. For example, the pair retrieve+information is extracted from any of the following fragments: information retrieval system; retrieval of information from databases; and information that can be retrieved by a user-controlled interactive search process. An example is shown in Figure 1. 8 One difficulty in obtaining head-modifier pairs of high accuracy is the ambiguity of nominal compounds. [7: In the experiments reported here we extracted head-modifier word pairs only. The CACM collection is too small to warrant generation of larger compounds, because of their low frequencies.]",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1154,
                        "end": 1162,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "HEAD-MODIFIER STRUCTURES",
                "sec_num": null
            },
            {
                "text": "8 Note that working with the parsed text ensures a high degree of precision in capturing the meaningful phrases, which is especially evident when compared with the results usually obtained from either unprocessed or only partially processed text (Lewis and Croft, 1990).",
                "cite_spans": [
                    {
                        "start": 246,
                        "end": 269,
                        "text": "(Lewis and Croft, 1990)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "HEAD-MODIFIER STRUCTURES",
                "sec_num": null
            },
            {
                "text": "The techniques are discussed and related to a general tape manipulation routine.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SENTENCE:",
                "sec_num": null
            },
            {
                "text": "[Parse structure listing truncated.] Since our parser has no knowledge about the text domain, and uses no semantic preferences, it does not attempt to guess any internal associations within such phrases. Instead, this task is passed to the pair extractor module which processes ambiguous parse structures in two phases. In phase one, all and only unambiguous head-modifier pairs are extracted, and frequencies of their occurrence are recorded. In phase two, frequency information of pairs generated in the first pass is used to form associations from ambiguous structures. For example, if language+natural has occurred unambiguously a number of times in contexts such as parser for natural language, while processing+natural has occurred significantly fewer times or perhaps none at all, then we will prefer the former association as valid.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PARSE STRUCTURE:",
                "sec_num": null
            },
            {
                "text": "Head-modifier pairs form compound terms used in database indexing. They also serve as occurrence contexts for smaller terms, including single-word terms. In order to determine whether such pairs signify any important association between terms, we calculate the value of the Informational Contribution (IC) function for each element in a pair.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TERM CORRELATIONS FROM TEXT",
                "sec_num": null
            },
            {
                "text": "Higher values indicate stronger association, and the element having the largest value is considered semantically dominant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TERM CORRELATIONS FROM TEXT",
                "sec_num": null
            },
            {
                "text": "The connection between the terms cooccurrences and the information they are transmitting (or otherwise, their meaning) was established and discussed in detail by Harris (1968 Harris ( , 1982 Harris ( , 1991 as fundamental for his mathematical theory of language. This theory is related to mathematical information theory, which formalizes the dependencies between the information and the probability distribution of the given code (alphabet or language). As stated by Shannon (1948) , information is measured by entropy which gives the capacity of the given code, in terms of the probabilities of its particular signs, to transmit information. It should be emphasized that, according to the information theory, there is no direct relation between information and meaning, entropy giving only a measure of what possible choices of messages are offered by a particular language. However, it offers theoretic foundations of the correlation between the probability of an event and transmitted information, and it can be further developed in order to capture the meaning of a message. There is indeed an inverse relation between information contributed by a word and its probability of occurrence p, that is, rare words carry more information than common ones. This relation can be given by the function -log p (x) which corresponds to information which a single word is contributing to the entropy of the entire language.",
                "cite_spans": [
                    {
                        "start": 162,
                        "end": 174,
                        "text": "Harris (1968",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 175,
                        "end": 190,
                        "text": "Harris ( , 1982",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 191,
                        "end": 206,
                        "text": "Harris ( , 1991",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 468,
                        "end": 482,
                        "text": "Shannon (1948)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TERM CORRELATIONS FROM TEXT",
                "sec_num": null
            },
            {
                "text": "In contrast to information theory, the goal of the present study is not to calculate informational capacities of a language, but to measure the relative strength of connection between the words in syntactic pairs. This connection corresponds to Harris' likelihood constraint, where the likelihood of an operator with respect to its argument words (or of an argument word in respect to different operators) is defined using word-combination frequencies within the linguistic dependency structures. Further, the likelihood of a given word being paired with another word, within one operator-argument structure, can be expressed in statistical terms as a conditional probability. In our present approach, the required measure had to be uniform for all word occurrences, covering a number of different operator-argument structures. This is reflected by an additional dispersion parameter, introduced to evaluate the heterogeneity of word So defined, IC function is asymmetric, a properry found desirable by Wilks et al. (1990) in their study of word co-occurrences in the Longman dictionary. In addition, IC is stable even for relatively low frequency words, which can be contrasted with Fano's mutual information formula recently used by Church and Hanks (1990) to compute word cooccurrence patterns in a 44 million word corpus of Associated Press news stories. They noted that while generally satisfactory, the mutual information formula often produces counterintuitive results for lowfrequency data. This is particularly worrisome for relatively smaller IR collections since many important indexing terms would be eliminated from consideration. A few examples obtained from CACM-3204 corpus are listed in Table 1 . IC values for terms become the basis for calculating term-to-term similarity coefficients. 
If two terms tend to be modified with a number of common modifiers and otherwise appear in few distinct contexts, we assign them a similarity coefficient, a real number between 0 and 1. The similarity is determined by comparing distribution characteristics for both terms within the corpus: how much information contents do they carry, do their information contribution over contexts vary greatly, are the common contexts in which these terms occur specific enough? In general we will credit high-contents terms appearing in identical contexts, especially if these contexts are not too commonplace. 9 The relative similarity between two words Xl and x2 is obtained using the following formula (a is a large constant): l0",
                "cite_spans": [
                    {
                        "start": 1003,
                        "end": 1022,
                        "text": "Wilks et al. (1990)",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 1235,
                        "end": 1258,
                        "text": "Church and Hanks (1990)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1704,
                        "end": 1711,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "TERM CORRELATIONS FROM TEXT",
                "sec_num": null
            },
            {
                "text": "SIM (x l ,x2) = log (or ~, simy(x t ,x2)) y where simy(x 1 ,x2) = MIN (IC (x 1, [x I ,Y ]),IC (x2, [x 2,Y ])) * (IC(y, [xt,y]) +IC(,y, [x2,y]))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TERM CORRELATIONS FROM TEXT",
                "sec_num": null
            },
            {
                "text": "The similarity function is further normalized with respect to SIM (xl,xl) . It may be worth pointing out that the similarities are calculated using term co-9 It would not be appropriate to predict similarity between language and logarithm on the basis of their co-occurrence with naturaL to This is inspired by a formula used by Hindie (1990) , and subsequently modified to take into account the asymmetry of IC occurrences in syntactic rather than in document-size contexts, the latter being the usual practice in nonlinguistic clustering (eg. Sparck Jones and Barber, 1971; Crouch, 1988; Lewis and Croft, 1990) . Although the two methods of term clustering may be considered mutually complementary in certain situations, we believe that more and stronger associations can be obtained through syntactic-context clustering, given sufficient amount of data and a reasonably accurate syntactic parser. ~",
                "cite_spans": [
                    {
                        "start": 66,
                        "end": 73,
                        "text": "(xl,xl)",
                        "ref_id": null
                    },
                    {
                        "start": 329,
                        "end": 342,
                        "text": "Hindie (1990)",
                        "ref_id": null
                    },
                    {
                        "start": 552,
                        "end": 575,
                        "text": "Jones and Barber, 1971;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 576,
                        "end": 589,
                        "text": "Crouch, 1988;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 590,
                        "end": 612,
                        "text": "Lewis and Croft, 1990)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TERM CORRELATIONS FROM TEXT",
                "sec_num": null
            },
            {
                "text": "Similarity relations are used to expand user queries with new terms, in an attempt to make the n Non-syntactic contexts cross sentence boundaries with no fuss, which is helpful with short, succinct documc~nts (such as CACM abstracts), but less so with longer texts; sec also (Grishman et al., 1986) .",
                "cite_spans": [
                    {
                        "start": 275,
                        "end": 298,
                        "text": "(Grishman et al., 1986)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUERY EXPANSION",
                "sec_num": null
            },
            {
                "text": "final search query more comprehensive (adding synonyms) and/or more pointed (adding specializations). 12 It follows that not all similarity relations will be equally useful in query expansion, for instance, complementary relations like the one between algol and fortran may actually harm system's performance, since we may end up retrieving many irrelevant documents. Similarly, the effectiveness of a query containing fortran is likely to diminish if we add a similar but far more general term such as language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUERY EXPANSION",
                "sec_num": null
            },
            {
                "text": "On the other hand, database search is likely to miss relevant documents if we overlook the fact that for. tran is a programming language, or that interpolate is a specification of approximate. We noted that an average set of similarities generated from a text corpus contains about as many \"good\" relations (synonymy, specialization) as \"bad\" relations (antonymy, complementation, generalization), as seen from the query expansion viewpoint. Therefore any attempt to separate these two classes and to increase the proportion of \"good\" relations should result in improved retrieval. This has indeed been confirmed in our experiments where a relatively crude filter has visibly increased retrieval precision.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUERY EXPANSION",
                "sec_num": null
            },
            {
                "text": "In order to create an appropriate filter, we expanded the IC function into a global specificity measure called the cumulative informational contribution function (ICW). ICW is calculated for each term across all contexts in which it occurs. The general philosophy here is that a more specific word/phrase would have a more limited use, i.e., would appear in fewer distinct contexts. ICW is similar to the standard inverted document frequency (idf) measure except that term frequency is measured over syntactic units rather than document size units./3 Terms with higher ICW values are generally considered more specific, but the specificity comparison is only meaningful for terms which are already known to be similar. The new function is calculated according to the following formula:",
                "cite_spans": [
                    {
                        "start": 442,
                        "end": 447,
                        "text": "(idf)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUERY EXPANSION",
                "sec_num": null
            },
            {
                "text": "ICt. (w) if both exist ICR(w) ICW(w)=I~R (w) otherwiseif\u00b0nly ICR(w)exists n Query expansion (in the sense considered here, though not quite in the same way) has been used in information retrieval research before (eg. Sparck Jones and Tait, 1984; Harman, 1988) , usually with mixed results. An alternative is to use tenm clusters to create new terms, \"metaterms\", and use them to index the database instead (eg. Crouch, 1988; Lewis and Croft, 1990) . We found that the query expansion approach gives the system more flexibility, for instance, by making room for hypertext-style topic exploration via user feedback.",
                "cite_spans": [
                    {
                        "start": 5,
                        "end": 8,
                        "text": "(w)",
                        "ref_id": null
                    },
                    {
                        "start": 41,
                        "end": 44,
                        "text": "(w)",
                        "ref_id": null
                    },
                    {
                        "start": 224,
                        "end": 245,
                        "text": "Jones and Tait, 1984;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 246,
                        "end": 259,
                        "text": "Harman, 1988)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 411,
                        "end": 424,
                        "text": "Crouch, 1988;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 425,
                        "end": 447,
                        "text": "Lewis and Croft, 1990)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUERY EXPANSION",
                "sec_num": null
            },
            {
                "text": "t3 We believe that measuring term specificity over document-size contexts (eg. Sparck Jones, 1972) ",
                "cite_spans": [
                    {
                        "start": 79,
                        "end": 98,
                        "text": "Sparck Jones, 1972)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUERY EXPANSION",
                "sec_num": null
            },
            {
                "text": "The preliminary series of experiments with the CACM-3204 collection of computer science abstracts showed a consistent improvement in performance: the average precision increased from 32.8% to 37.1% (a 13% increase), while the normalized recall went from 74.3% to 84.5% (a 14% increase), in comparison with the statistics of the base NIST system. This improvement is a combined effect of the new stemmer, compound terms, term selection in queries, and query expansion using filtered similarity relations. The choice of similarity relation filter has been found critical in improving retrieval precision through query expansion. It should also be pointed out that only about 1.5% of all similarity relations originally generated from CACM-3204 were found processing texts without any internal document structure. Table 2 . Filtered word similarities (* indicates the more specific term). admissible after filtering, contributing only 1.2 expansion on average per query. It is quite evident significantly larger corpora are required to produce more dramatic results. 15 ~6 A detailed summary is given in Table 3 below.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 811,
                        "end": 818,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 1101,
                        "end": 1108,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "SUMMARY OF RESULTS",
                "sec_num": null
            },
            {
                "text": "These results, while quite modest by IR stundards, are significant for another reason as well. They were obtained without any manual intervention into the database or queries, and without using any other ts KL Kwok (private communication) has suggested that the low percentage of admissible relations might be similar to the phenomenon of 'tight dusters' which while meaningful are so few that their impact is small.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "SUMMARY OF RESULTS",
                "sec_num": null
            },
            {
                "text": ":s A sufficiently large text corpus is 20 million words or more. This has been paRially confirmed by experiments performed at the University of Massachussetts (B. Croft, private comrnunicadon Table 3 . Recall/precision statistics for CACM-3204 information about the database except for the text of the documents (i.e., not even the hand generated keyword fields enclosed with most documents were used). Lewis and Croft (1990) , and Croft et al. (1991) report results similar to ours but they take advantage of Computer Reviews categories manually assigned to some documents. The purpose of this research is to explore the potential of automated NLP in dealing with large scale IR problems, and not necessarily to obtain the best possible results on any particular data collection. One of our goals is to point a feasible direction for integrating NLP into the traditional IR.",
                "cite_spans": [
                    {
                        "start": 403,
                        "end": 425,
                        "text": "Lewis and Croft (1990)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 432,
                        "end": 451,
                        "text": "Croft et al. (1991)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 192,
                        "end": 199,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "SUMMARY OF RESULTS",
                "sec_num": null
            },
            {
                "text": "These include CACM-3204, MUC-3, and a selection of nearly 6,000 technical articles extracted from Computer Library database (a Ziff Communications Inc. CD-ROM).4 A complete description can be found in(Strzalkowski, 1992).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The filter was most effective at o = 0.57.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We would like to thank Donna Harman of NIST for making her IR system available to us. We would also like to thank Ralph Weischedel, Marie Meteer and Heidi Fox of BBN for providing and assisting in the use of the part of speech tagger. KL Kwok has offered many helpful comments on an earlier draft of this paper. In addition, ACM has generously provided us with text data from the Computer Library database distributed by Ziff Communications Inc. This paper is based upon work suppened by the Defense Advanced Research Project Agency under Contract N00014-90-J-1851 from the Office of Naval Research, the National Science Foundation under Grant 1RI-89-02304, and a grant from the Swiss National Foundation for Scientific Research. We also acknowledge a support from Canadian Institute for Robotics and Intelligent Systems (IRIS).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ACKNOWLEDGEMENTS",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Word association norms, mutual information, and lexicography",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Hanks",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Computational Linguistics",
                "volume": "16",
                "issue": "1",
                "pages": "22--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Church, Kenneth Ward and Hanks, Patrick. 1990. \"Word association norms, mutual informa- tion, and lexicography.\" Computational Linguistics, 16(1), MIT Press, pp. 22-29.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The Use of Phrases and Structured Queries in Information Retrieval",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Croft",
                        "suffix": ""
                    },
                    {
                        "first": "Howard",
                        "middle": [
                            "R"
                        ],
                        "last": "Bruce",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [
                            "D"
                        ],
                        "last": "Turtle",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Lewis",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Proceedings of ACM SIGIR-91",
                "volume": "",
                "issue": "",
                "pages": "32--45",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Croft, W. Bruce, Howard R. Turtle, and David D. Lewis. 1991. \"The Use of Phrases and Struc- tured Queries in Information Retrieval.\" Proceedings of ACM SIGIR-91, pp. 32-45.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A cluster-based approach to thesaurus construction",
                "authors": [
                    {
                        "first": "Carolyn",
                        "middle": [
                            "J"
                        ],
                        "last": "Crouch",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Proceedings of ACM SIGIR-88",
                "volume": "",
                "issue": "",
                "pages": "309--320",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Crouch, Carolyn J. 1988. \"A cluster-based approach to thesaurus construction.\" Proceedings of ACM SIGIR-88, pp. 309-320.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Experiments in Automated Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-Syntactic Methods",
                "authors": [
                    {
                        "first": "Joel",
                        "middle": [
                            "L"
                        ],
                        "last": "Fagan",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fagan, Joel L. 1987. Experiments in Automated Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-Syntactic Methods. Ph.D. Thesis, Department of Com- puter Science, CorneU University.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Discovery procedures for sublanguage selectional patterns: initial experiments",
                "authors": [
                    {
                        "first": "Ralph",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    },
                    {
                        "first": "Lynette",
                        "middle": [],
                        "last": "Hirschman",
                        "suffix": ""
                    },
                    {
                        "first": "Ngo",
                        "middle": [
                            "T"
                        ],
                        "last": "Nhan",
                        "suffix": ""
                    }
                ],
                "year": 1986,
                "venue": "ComputationalLinguistics",
                "volume": "12",
                "issue": "3",
                "pages": "205--215",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Grishman, Ralph, Lynette Hirschman, and Ngo T. Nhan. 1986. \"Discovery procedures for sub- language selectional patterns: initial experi- ments\". ComputationalLinguistics, 12(3), pp. 205-215.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Position paper at the workshop on Future Directions in Natural Language Processing in Information Retrieval",
                "authors": [
                    {
                        "first": "Ralph",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    },
                    {
                        "first": "Tomek",
                        "middle": [],
                        "last": "Strzalkowski",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Grishman, Ralph and Tomek Strzalkowski. 1991. \"Information Retrieval and Natural Language Processing.\" Position paper at the workshop on Future Directions in Natural Language Pro- cessing in Information Retrieval, Chicago.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Towards interactive query expansion",
                "authors": [
                    {
                        "first": "Donna",
                        "middle": [],
                        "last": "Harman",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Proceedings of ACM SIGIR-88",
                "volume": "",
                "issue": "",
                "pages": "321--331",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harman, Donna. 1988. \"Towards interactive query expansion.\" Proceedings of ACM SIGIR-88, pp. 321-331.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Retrieving Records from a Gigabyte of text on a Minicomputer Using Statistical Ranking",
                "authors": [
                    {
                        "first": "Donna",
                        "middle": [],
                        "last": "Harman",
                        "suffix": ""
                    },
                    {
                        "first": "Gerald",
                        "middle": [],
                        "last": "Candela",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Journal of the American Society for Information Science",
                "volume": "41",
                "issue": "8",
                "pages": "581--589",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harman, Donna and Gerald Candela. 1989. \"Retrieving Records from a Gigabyte of text on a Minicomputer Using Statistical Rank- ing.\" Journal of the American Society for Information Science, 41(8), pp. 581-589.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "A Theory of language and Information",
                "authors": [
                    {
                        "first": "Zelig",
                        "middle": [
                            "S"
                        ],
                        "last": "Harris",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "A Mathematical Approach",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harris, Zelig S. 1991. A Theory of language and Information. A Mathematical Approach.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A Grammar of English on Mathematical Principles",
                "authors": [
                    {
                        "first": "Zelig",
                        "middle": [
                            "S"
                        ],
                        "last": "Harris",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harris, Zelig S. 1982. A Grammar of English on Mathematical Principles. Wiley.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Mathematical Structures of Language",
                "authors": [
                    {
                        "first": "Zelig",
                        "middle": [
                            "S"
                        ],
                        "last": "Harris",
                        "suffix": ""
                    }
                ],
                "year": 1968,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harris, Zelig S. 1968. Mathematical Structures of Language. Wiley.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Noun classification from predicate-argument structures",
                "authors": [
                    {
                        "first": "Donald",
                        "middle": [],
                        "last": "Hindle",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Proc. 28 Meeting of the ACL",
                "volume": "",
                "issue": "",
                "pages": "268--275",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hindle, Donald. 1990. \"Noun classification from predicate-argument structures.\" Proc. 28 Meeting of the ACL, Pittsburgh, PA, pp. 268- 275.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Term Clustering of Syntactic Phrases",
                "authors": [
                    {
                        "first": "David",
                        "middle": [
                            "D"
                        ],
                        "last": "Lewis",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [
                            "Bruce"
                        ],
                        "last": "Croft",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Proceedings of ACM SIGIR-90",
                "volume": "",
                "issue": "",
                "pages": "385--405",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lewis, David D. and W. Bruce Croft. 1990. \"Term Clustering of Syntactic Phrases\". Proceedings of ACM SIGIR-90, pp. 385-405.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Retrieval Performance in Ferret: A Conceptual Information Retrieval System",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Mauldin",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Proceedings of ACM SIGIR-91",
                "volume": "",
                "issue": "",
                "pages": "347--355",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mauldin, Michael. 1991. \"Retrieval Performance in Ferret: A Conceptual Information Retrieval System.\" Proceedings of ACM SIGIR-91, pp. 347-355.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Natural Language Information Processing",
                "authors": [
                    {
                        "first": "Naomi",
                        "middle": [],
                        "last": "Sager",
                        "suffix": ""
                    }
                ],
                "year": 1981,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sager, Naomi. 1981. Natural Language Information Processing. Addison-Wesley.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Automatic Text Processing: the transformation, analysis, and retrieval of information by computer",
                "authors": [
                    {
                        "first": "Gerard",
                        "middle": [],
                        "last": "Salton",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Salton, Gerard. 1989. Automatic Text Processing: the transformation, analysis, and retrieval of information by computer. Addison-Wesley, Reading, MA.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A mathematical theory of communication",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "E"
                        ],
                        "last": "Shannon",
                        "suffix": ""
                    }
                ],
                "year": 1948,
                "venue": "Bell System Technical Journal",
                "volume": "27",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shannon, C. E. 1948. \"A mathematical theory of communication.\" Bell System Technical Journal, vol. 27, July-October.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Experiments on incorporating syntactic processing of user queries into a document retrieval strategy",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "F"
                        ],
                        "last": "Smeaton",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "J"
                        ],
                        "last": "Van Rijsbergen",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Proceedings of ACM SIGlR-88",
                "volume": "",
                "issue": "",
                "pages": "31--51",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Smeaton, A. F. and C. J. van Rijsbergen. 1988. \"Experiments on incorporating syntactic pro- cessing of user queries into a document retrieval strategy.\" Proceedings of ACM SIGlR-88, pp. 31-51.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Statistical interpretation of term specificity and its application in retrieval",
                "authors": [
                    {
                        "first": "Sparck",
                        "middle": [],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "Karen",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 1972,
                "venue": "Journal of Documentation",
                "volume": "28",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sparck Jones, Karen. 1972. \"Statistical interpreta- tion of term specificity and its application in retrieval.\" Journal of Documentation, 28(1), pp. ll-20.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "What makes automatic keyword classification effecfive?",
                "authors": [
                    {
                        "first": "Sparck",
                        "middle": [],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [
                            "O"
                        ],
                        "last": "Barber",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "Journal of the American Society for Information Science",
                "volume": "",
                "issue": "",
                "pages": "166--175",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sparck Jones, K. and E. O. Barber. 1971. \"What makes automatic keyword classification effec- five?\" Journal of the American Society for Information Science, May-June, pp. 166-175.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Automatic search term variant generation",
                "authors": [
                    {
                        "first": "Sparck",
                        "middle": [],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "I"
                        ],
                        "last": "Tait",
                        "suffix": ""
                    }
                ],
                "year": 1984,
                "venue": "Journal of Documentation",
                "volume": "40",
                "issue": "1",
                "pages": "50--66",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sparck Jones, K. and J. I. Tait. 1984. \"Automatic search term variant generation.\" Journal of Documentation, 40(1), pp. 50-66.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Fast Text Processing for Information Retrieval",
                "authors": [
                    {
                        "first": "Tomek",
                        "middle": [],
                        "last": "Strzalkowski",
                        "suffix": ""
                    },
                    {
                        "first": "Barbara",
                        "middle": [],
                        "last": "Vauthey",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Proceedings of the 4th DARPA Speech and Natural Language Workshop",
                "volume": "",
                "issue": "",
                "pages": "346--351",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Strzalkowski, Tomek and Barbara Vauthey. 1991. \"Fast Text Processing for Information Retrieval.'\" Proceedings of the 4th DARPA Speech and Natural Language Workshop, Morgan-Kaufman, pp. 346-351.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Natural Language Processing in Automated Information Retrieval",
                "authors": [
                    {
                        "first": "Tomek",
                        "middle": [],
                        "last": "Strzalkowski",
                        "suffix": ""
                    },
                    {
                        "first": "Barbara",
                        "middle": [],
                        "last": "Vauthey",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Proteus Project Memo #",
                "volume": "42",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Strzalkowski, Tomek and Barbara Vauthey. 1991. \"'Natural Language Processing in Automated Information Retrieval.\" Proteus Project Memo #42, Courant Institute of Mathematical Science, New York University.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "TYP: A Fast and Robust Parser for Natural Language",
                "authors": [
                    {
                        "first": "Tomek",
                        "middle": [],
                        "last": "Strzalkowski",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COL-ING)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Strzalkowski, Tomek. 1992. \"TYP: A Fast and Robust Parser for Natural Language.\" Proceedings of the 14th International Confer- ence on Computational Linguistics (COL- ING), Nantes, France, July 1992.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Providing machine tractable dictionary tools",
                "authors": [
                    {
                        "first": "Yorick",
                        "middle": [
                            "A"
                        ],
                        "last": "Wilks",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Fass",
                        "suffix": ""
                    },
                    {
                        "first": "Cheng-Ming",
                        "middle": [],
                        "last": "Guo",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [
                            "E"
                        ],
                        "last": "Mcdonald",
                        "suffix": ""
                    },
                    {
                        "first": "Tony",
                        "middle": [],
                        "last": "Plate",
                        "suffix": ""
                    },
                    {
                        "first": "Brian",
                        "middle": [
                            "M"
                        ],
                        "last": "Slator",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Machine Translation",
                "volume": "5",
                "issue": "",
                "pages": "99--154",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wilks, Yorick A., Dan Fass, Cheng-Ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator. 1990. \"Providing machine tractable dictionary tools.\" Machine Translation, 5, pp. 99-154.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "text": "associations. The resulting new formula IC (x,[x,y ])    is based on (an estimate of) the conditional probability of seeing a word y to the right of the word x, modified with a dispersion parameter for x.lC(x, [x,y ]) -f~'Y nx + dz -1where f~,y is the frequency of[x,y ]  in the corpus, n~ is the number of pairs in which x occurs at the same position as in [x,y], and d(x) is the dispersion parameter understood as the number of distinct wordswith which x is paired. WhenIC(x, [x,y ]) = 0, x and y never occur together (i.e., f~.y=0); whenIC(x, [x,y ]) = 1, x occurs only with y (i.e., fx,y = n~ and dx = 1).",
                "uris": null
            },
            "TABREF2": {
                "text": "Therefore interpolate can be used to specialize approximate, while language cannot be used to expand algol. Note that if 8 is well chosen (we used 8=10), then the above filter will also help to reject antonymous and complementary relations, such as SIM~o,~(pl_i, cobol)=0.685 with ICW (pl_i)=O.O175 and ICW(cobol)=O.0289.",
                "type_str": "table",
                "content": "<table><tr><td colspan=\"2\">where (with n~, d~ &gt; 0): 14</td><td/></tr><tr><td/><td>n~</td><td/></tr><tr><td colspan=\"3\">ICL(W) = IC ([w,_ ]) -d~(n~+d~-l)</td></tr><tr><td/><td>n~</td><td/></tr><tr><td colspan=\"3\">ICR(w) = IC ([_,w ]) = d~(n~+d~-l)</td></tr><tr><td colspan=\"3\">For any two terms wl and w2, and a constant 8 &gt; 1,</td></tr><tr><td colspan=\"3\">if ICW(w2)&gt;8* ICW(wl) then w2 is considered</td></tr><tr><td>more</td><td>specific than ' wl.</td><td>In addition, if</td></tr><tr><td colspan=\"3\">SIMno,,(wl,w2)=\u00a2~&gt; O, where 0 is an empirically</td></tr><tr><td colspan=\"3\">established threshold, then w2 can be added to the</td></tr><tr><td colspan=\"3\">query containing term wl with weight ~.14 In the</td></tr><tr><td colspan=\"2\">CACM-3204 collection:</td><td/></tr><tr><td/><td>ICW (algol)</td><td>= 0.0020923</td></tr><tr><td/><td>ICW(language)</td><td>= 0.0000145</td></tr><tr><td/><td colspan=\"2\">ICW(approximate) = 0.0000218</td></tr><tr><td/><td colspan=\"2\">ICW (interpolate) = 0.0042410</td></tr><tr><td/><td colspan=\"2\">We continue working to</td></tr><tr><td colspan=\"3\">develop more effective filters. Examples of filtered</td></tr><tr><td colspan=\"3\">similarity relations obtained from CACM-3204</td></tr><tr><td colspan=\"3\">corpus (and their sim values): abstract graphical</td></tr><tr><td colspan=\"3\">0.612; approximate interpolate 0.655; linear ordi-</td></tr><tr><td colspan=\"3\">nary 0.743; program translate 0.596; storage buffer</td></tr><tr><td colspan=\"3\">0.622. Some (apparent?) failures: active digital</td></tr><tr><td colspan=\"3\">0.633; efficient new 0.580; gamma beta 0.720. More</td></tr><tr><td colspan=\"3\">similarities are listed in Table 2.</td></tr><tr><td>may not be ap-</td><td/><td/></tr><tr><td>propriate in this case. In particular, syntax-based contexts allow for</td><td/><td/></tr></table>",
                "num": null,
                "html": null
            },
            "TABREF4": {
                "text": ").",
                "type_str": "table",
                "content": "<table><tr><td>Tests</td><td>base</td><td>surf.trim</td><td>query exp.</td></tr><tr><td>Recall</td><td/><td>Precision</td><td/></tr><tr><td>0.00</td><td>0.764</td><td>0.775</td><td>0.793</td></tr><tr><td>0.10</td><td>0.674</td><td>0.688</td><td>0.700</td></tr><tr><td>0.20</td><td>0.547</td><td>0.547</td><td>0.573</td></tr><tr><td>0.30</td><td>0.449</td><td>0.479</td><td>0.486</td></tr><tr><td>0.40</td><td>0.387</td><td>0A21</td><td>0.421</td></tr><tr><td>0.50</td><td>0.329</td><td>0.356</td><td>0.372</td></tr><tr><td>0.60</td><td>0.273</td><td>0.280</td><td>0.304</td></tr><tr><td>0.70</td><td>0.198</td><td>0.222</td><td>0.226</td></tr><tr><td>0.80</td><td>0.146</td><td>0.170</td><td>0.174</td></tr><tr><td>0.90</td><td>0.093</td><td>0.112</td><td>0.114</td></tr><tr><td>1.00</td><td>0.079</td><td>0.087</td><td>0.090</td></tr><tr><td>Avg. Prec.</td><td>0.328</td><td>0.356</td><td>0.371</td></tr><tr><td>% change</td><td/><td>8.3</td><td>13.1</td></tr><tr><td>Norm Rec.</td><td>0.743</td><td>0.841</td><td>0.842</td></tr><tr><td>Queries</td><td>50</td><td>50</td><td>50</td></tr></table>",
                "num": null,
                "html": null
            }
        }
    }
}