{
    "paper_id": "P03-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:14:46.441187Z"
    },
    "title": "Orthogonal Negation in Vector Spaces for Modelling Word-Meanings and Document Retrieval",
    "authors": [
        {
            "first": "Dominic",
            "middle": [],
            "last": "Widdows",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Stanford University",
                "location": {}
            },
            "email": "dwiddows@csli.stanford.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Standard IR systems can process queries such as \"web NOT internet\", enabling users who are interested in arachnids to avoid documents about computing. The documents retrieved for such a query should be irrelevant to the negated query term. Most systems implement this by reprocessing results after retrieval to remove documents containing the unwanted string of letters. This paper describes and evaluates a theoretically motivated method for removing unwanted meanings directly from the original query in vector models, with the same vector negation operator as used in quantum logic. Irrelevance in vector spaces is modelled using orthogonality, so query vectors are made orthogonal to the negated term or terms. As well as removing unwanted terms, this form of vector negation reduces the occurrence of synonyms and neighbours of the negated terms by as much as 76% compared with standard Boolean methods. By altering the query vector itself, vector negation removes not only unwanted strings but unwanted meanings.",
    "pdf_parse": {
        "paper_id": "P03-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Standard IR systems can process queries such as \"web NOT internet\", enabling users who are interested in arachnids to avoid documents about computing. The documents retrieved for such a query should be irrelevant to the negated query term. Most systems implement this by reprocessing results after retrieval to remove documents containing the unwanted string of letters. This paper describes and evaluates a theoretically motivated method for removing unwanted meanings directly from the original query in vector models, with the same vector negation operator as used in quantum logic. Irrelevance in vector spaces is modelled using orthogonality, so query vectors are made orthogonal to the negated term or terms. As well as removing unwanted terms, this form of vector negation reduces the occurrence of synonyms and neighbours of the negated terms by as much as 76% compared with standard Boolean methods. By altering the query vector itself, vector negation removes not only unwanted strings but unwanted meanings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Vector spaces enjoy widespread use in information retrieval (Salton and McGill, 1983; Baeza-Yates and Ribiero-Neto, 1999) , and from this original application vector models have been applied to semantic tasks such as word-sense acquisition (Landauer and Dumais, 1997; Widdows, 2003) and disambiguation (Sch\u00fctze, 1998) . One benefit of these models is that the similarity between pairs of terms or between queries and documents is a continuous function, automatically ranking results rather than giving just a YES/NO judgment. In addition, vector models can be freely built from unlabelled text and so are both entirely unsupervised, and an accurate reflection of the way words are used in practice.",
                "cite_spans": [
                    {
                        "start": 60,
                        "end": 85,
                        "text": "(Salton and McGill, 1983;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 86,
                        "end": 121,
                        "text": "Baeza-Yates and Ribiero-Neto, 1999)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 240,
                        "end": 267,
                        "text": "(Landauer and Dumais, 1997;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 268,
                        "end": 282,
                        "text": "Widdows, 2003)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 302,
                        "end": 317,
                        "text": "(Sch\u00fctze, 1998)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In vector models, terms are usually combined to form more complicated query statements by (weighted) vector addition. Because vector addition is commutative, terms are combined in a \"bag of words\" fashion. While this has proved to be effective, it certainly leaves room for improvement: any genuine natural language understanding of query statements cannot rely solely on commutative addition for building more complicated expressions out of primitives.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Other algebraic systems such as Boolean logic and set theory have well-known operations for building composite expressions out of more basic ones. Settheoretic models for the logical connectives 'AND', 'NOT' and 'OR' are completely understood by most researchers, and used by Boolean IR systems for assembling the results to complicated queries. It is clearly desirable to develop a calculus which combines the flexible ranking of results in a vector model with the crisp efficiency of Boolean logic, a goal which has long been recognised and attempted mainly for conjunction and disjunction. This paper proposes such a scheme for negation, based upon well-known linear algebra, and which also implies a vector form of disjunction. It turns out that these vector connectives are precisely those used in quantum logic (Birkhoff and von Neumann, 1936) , a development which is discussed in much more detail in (Widdows and Peters, 2003) . Because of its simplicity, our model is easy to understand and to implement.",
                "cite_spans": [
                    {
                        "start": 817,
                        "end": 849,
                        "text": "(Birkhoff and von Neumann, 1936)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 908,
                        "end": 934,
                        "text": "(Widdows and Peters, 2003)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Vector negation is based on the intuition that unrelated meanings should be orthogonal to one another, which is to say that they should have no features in common at all. Thus vector negation generates a 'meaning vector' which is completely orthogonal to the negated term. Document retrieval experiments demonstrate that vector negation is not only effective at removing unwanted terms: it is also more effective than other methods at removing their synonyms and related terms. This justifies the claim that, by producing a single query vector for \"a NOT b\", we remove not only unwanted strings but also unwanted meanings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We describe the underlying motivation behind this model and define the vector negation and disjunction operations in Section 2. In Section 3 we review other ways negation is implemented in Information Retrieval, comparing and contrasting with vector negation. In Section 4 we describe experiments demonstrating the benefits and drawbacks of vector negation compared with two other methods for negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this section we use well-known linear algebra to define vector negation in terms of orthogonality and disjunction as the linear sum of subspaces. The mathematical apparatus is covered in greater detail in (Widdows and Peters, 2003) . If A is a set (in some universe of discourse U ), then 'NOT A' corresponds to the complement A \u22a5 of the set A in U (by definition). By a simple analogy, let A be a vector subspace of a vector space V (equipped with a scalar product). Then the concept 'NOT A' should correspond to the orthogonal complement A \u22a5 of A under the scalar product (Birkhoff and von Neumann, 1936, \u00a76) . If we think of a basis for V as a set of features, this says that 'NOT A' refers to the subspace of V which has no features in common with A. We make the following definitions. Let V be a (real) vector space equipped with a scalar product. We will use the notation A \u2264 V to mean \"A is a vector subspace of V .\" For A \u2264 V , define the orthogonal subspace A \u22a5 to be the subspace",
                "cite_spans": [
                    {
                        "start": 208,
                        "end": 234,
                        "text": "(Widdows and Peters, 2003)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 577,
                        "end": 613,
                        "text": "(Birkhoff and von Neumann, 1936, \u00a76)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "A \u22a5 \u2261 {v \u2208 V : \u2200a \u2208 A, a \u2022 v = 0}.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "For the purposes of modelling word-meanings, we might think of 'orthogonal' as a model for 'completely unrelated' (having similarity score zero). This makes perfect sense for information retrieval, where we assume (for example) that if two words never occur in the same document then they have no features in common.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Definition 1 Let a, b \u2208 V and A, B \u2264 V . By NOT A we mean A \u22a5 and by NOT a, we mean a \u22a5 , where a = {\u03bba : \u03bb \u2208 R} is the 1-dimensional subspace generated by a. By a NOT B we mean the projection of a onto B \u22a5 and by a NOT b we mean the projection of a onto b \u22a5 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "We now show how to use these notions to perform calculations with individual term or query vectors in a form which is simple to program and efficient to run.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Theorem 1 Let a, b \u2208 V . Then a NOT b is represented by the vector",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "a NOT b \u2261 a \u2212 a \u2022 b |b| 2 b.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "where",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "|b| 2 = b \u2022 b is the modulus of b.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Proof. A simple proof is given in (Widdows and Peters, 2003) .",
                "cite_spans": [
                    {
                        "start": 34,
                        "end": 60,
                        "text": "(Widdows and Peters, 2003)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "For normalised vectors, Theorem 1 takes the particularly simple form",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "a NOT b = a \u2212 (a \u2022 b)b,",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "which in practice is then renormalised for consistency. One computational benefit is that Theorem 1 gives a single vector for a NOT b, so finding the similarity between any other vector and a NOT b is just a single scalar product computation. Disjunction is also simple to envisage, the expression b 1 OR . . . OR b n being modelled by the subspace",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "B = {\u03bb 1 b 1 + . . . + \u03bb n b n : \u03bb i \u2208 R}.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Theoretical motivation for this formulation can be found in (Birkhoff and von Neumann, 1936, \u00a71, \u00a76) and (Widdows and Peters, 2003) : for example, B is the smallest subspace of V which contains the set {b j }.",
                "cite_spans": [
                    {
                        "start": 60,
                        "end": 100,
                        "text": "(Birkhoff and von Neumann, 1936, \u00a71, \u00a76)",
                        "ref_id": null
                    },
                    {
                        "start": 105,
                        "end": 131,
                        "text": "(Widdows and Peters, 2003)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Computing the similarity between a vector a and this subspace B is computationally more expensive than for the negation of Theorem 1, because the scalar product of a with (up to) n vectors in an orthogonal basis for B must be computed. Thus the gain we get by comparing each document with the query a NOT b using only one scalar product operation is absent for disjunction.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "However, this benefit is regained in the case of negated disjunction. Suppose we negate not only one argument but several. If a user specifies that they want documents related to a but not b 1 , b 2 , . . . , b n , then (unless otherwise stated) it is clear that they only want documents related to none of the unwanted terms b i (rather than, say, the average of these terms).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "This motivates a process which can be thought of as a vector formulation of the classical de Morgan equivalence \u223c a\u2227 \u223c b \u2261\u223c (a \u2228 b), by which the expression",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "a AND NOT b 1 AND NOT b 2 . . . AND NOT b n is translated to a NOT (b 1 OR . . . OR b n ).",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Using Definition 1, this expression can be modelled with a unique vector which is orthogonal to all of the unwanted arguments {b 1 }. However, unless the vectors b 1 , . . . , b n are orthogonal (or identical), we need to obtain an orthogonal basis for the subspace b 1 OR . . . OR b n before we can implement a higherdimensional version of Theorem 1. This is because the projection operators involved are in general noncommutative, one of the hallmark differences between Boolean and quantum logic. In this way vector negation generates a meaningvector which takes into account the similarities and differences between the negative terms. A query for chip NOT computer, silicon is treated differently from a query for chip NOT computer, potato.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "Vector negation is capable of realising that for the first query, the two negative terms are referring to the same general topic area, but in the second case the task is to remove radically different meanings from the query. This technique has been used to remove several meanings from a query iteratively, allowing a user to 'home in on' the desired meaning by systematically pruning away unwanted features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation and Disjunction in Vector Spaces",
                "sec_num": "2"
            },
            {
                "text": "word-senses Our first experiments with vector negation were to determine whether the negation operator could find different senses of ambiguous words by negating a word closely related to one of the meanings. A vector space model was built using Latent Semantic Analysis, similar to the systems of (Landauer and Dumais, 1997; Sch\u00fctze, 1998) . The effect of LSA is to increase linear dependency between terms, and for this reason it is likely that LSA is a crucial step in our approach. Terms were indexed depending on their co-occurrence with 1000 frequent \"content-bearing words\" in a 15 word context-window, giving each term 1000 coordinates. This was reduced to 100 dimensions using singular value decomposition. Later on, document vectors were assigned in the usual manner by summation of term vectors using tf-idf weighting (Salton and McGill, 1983, p. 121) . Vectors were normalised, so that the standard (Euclidean) scalar product and cosine similarity coincided. This scalar product was used as a measure of term-term and term-document similarity throughout our experiments. This method was used because it has been found to be effective at producing good term-term similarities for word-sense disambiguation (Sch\u00fctze, 1998) and automatic lexical acquisition (Widdows, 2003) , and these similarities were used to generate interesting queries and to judge the effectiveness of different forms of negation. More details on the building of this vector space model can be found in (Widdows, 2003; Widdows and Peters, 2003 Two early results using negation to find senses of ambiguous words are given in Table 1 , showing that vector negation is very effective for removing the 'legal' meaning from the word suit and the 'sporting' meaning from the word play, leaving respectively the 'clothing' and 'performance' meanings. Note that removing a particular word also removes concepts related to the negated word. This gives credence to the claim that our mathematical model is removing the meaning of a word, rather than just a string of characters. This encouraged us to set up a larger scale experiment to test this hypothesis, which is described in Section 4.",
                "cite_spans": [
                    {
                        "start": 298,
                        "end": 325,
                        "text": "(Landauer and Dumais, 1997;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 326,
                        "end": 340,
                        "text": "Sch\u00fctze, 1998)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 829,
                        "end": 862,
                        "text": "(Salton and McGill, 1983, p. 121)",
                        "ref_id": null
                    },
                    {
                        "start": 1217,
                        "end": 1232,
                        "text": "(Sch\u00fctze, 1998)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 1267,
                        "end": 1282,
                        "text": "(Widdows, 2003)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 1485,
                        "end": 1500,
                        "text": "(Widdows, 2003;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 1501,
                        "end": 1525,
                        "text": "Widdows and Peters, 2003",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1606,
                        "end": 1613,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Initial experiments modelling word-senses",
                "sec_num": "2.1"
            },
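The document-vector construction described above (tf-idf-weighted sums of reduced term vectors, normalised so that scalar product and cosine similarity coincide) can be sketched as follows. This is a minimal illustration only: the toy 2-d term vectors and idf values are invented stand-ins for the output of the co-occurrence/SVD step, not the paper's actual model.

```python
import math

# Invented stand-ins for term vectors produced by co-occurrence + SVD.
term_vectors = {
    "suit":  [0.9, 0.1],
    "dress": [0.2, 0.9],
}
idf = {"suit": 1.0, "dress": 1.5}  # invented idf weights

def normalise(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def doc_vector(term_counts):
    """Document vector = normalised tf-idf-weighted sum of term vectors."""
    v = [0.0] * 2
    for term, tf in term_counts.items():
        for i, x in enumerate(term_vectors[term]):
            v[i] += tf * idf[term] * x
    return normalise(v)

d = doc_vector({"suit": 2, "dress": 1})
# For unit vectors, the scalar product IS the cosine similarity.
sim = sum(x * y for x, y in zip(d, normalise(term_vectors["suit"])))
print(sim)
```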
            {
                "text": "There have been rigorous studies of Boolean operators for information retrieval, including the p-norms of and the matrix forms of Turtle and Croft (1989), which have focussed particularly on mathematical expressions for conjunction and disjunction. However, typical forms of negation (such as NOT p = 1 \u2212 p) have not taken into account the relationship between the negated argument and the rest of the query.",
                "cite_spans": [
                    {
                        "start": 130,
                        "end": 153,
                        "text": "Turtle and Croft (1989)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other forms of Negation in IR",
                "sec_num": "3"
            },
            {
                "text": "Negation has been used in two main forms in IR systems: for the removal of unwanted documents after retrieval and for negative relevance feedback. We describe these methods and compare them with vector negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other forms of Negation in IR",
                "sec_num": "3"
            },
            {
                "text": "A traditional Boolean search for documents related to the query a NOT b would return simply those documents which contain the term a and do not contain the term b. More formally, let D be the document collection and let D_i \u2282 D be the subset of documents containing the term i. Then the results of the Boolean query a NOT b would be the set D_a \u2229 D\u0304_b, where D\u0304_b is the complement of D_b in D. Variants of this are used within a vector model, by using vector retrieval to retrieve a (ranked) set of relevant documents and then 'throwing away' documents containing the unwanted terms (Salton and McGill, 1983, p. 26). This paper will refer to such methods under the general heading of 'post-retrieval filtering'.",
                "cite_spans": [
                    {
                        "start": 596,
                        "end": 628,
                        "text": "(Salton and McGill, 1983, p. 26)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation by filtering results after",
                "sec_num": "3.1"
            },
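As a toy illustration (the terms and document ids here are invented), the Boolean a NOT b above is just a set difference over an inverted index:

```python
# Toy inverted index: term -> set of ids of documents containing it.
docs_with = {
    "suit": {1, 2, 3, 5},
    "lawsuit": {2, 5},
}

# Boolean 'suit NOT lawsuit': D_a intersected with the complement of D_b.
results = docs_with["suit"] - docs_with["lawsuit"]
print(sorted(results))  # [1, 3]
```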
            {
                "text": "There are at least three reasons for preferring vector negation to post-retrieval filtering. Firstly, post-retrieval filtering is not very principled and is subject to error: for example, it would remove a long document containing only one instance of the unwanted term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation by filtering results after",
                "sec_num": "3.1"
            },
            {
                "text": "One might argue here that this problem is avoided if a document containing unwanted terms is given a 'negative score' rather than simply being disqualified. This would leave us considering a combined score,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation by filtering results after",
                "sec_num": "3.1"
            },
            {
                "text": "sim(d, a NOT b) = d \u2022 a \u2212 \u03bbd \u2022 b",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation by filtering results after",
                "sec_num": "3.1"
            },
            {
                "text": "for some parameter \u03bb. However, since this is the same as d \u2022 (a \u2212 \u03bbb), it is computationally more efficient to treat a \u2212 \u03bbb as a single vector. This is exactly what vector negation accomplishes, and it also determines a suitable value of \u03bb from a and b. Thus a second benefit of vector negation is that it produces a combined vector for a NOT b which enables the relevance score of each document to be computed using just one scalar product operation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation by filtering results after",
                "sec_num": "3.1"
            },
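A minimal sketch of this (the helper names are ours, not from the paper's implementation): for unit vectors, vector negation fixes \u03bb = a \u00b7 b, which makes the combined query vector orthogonal to the negated term b.

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def normalise(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def negate(a, b):
    """Vector for 'a NOT b': subtract the projection of a onto b."""
    lam = dot(a, b)  # the lambda that vector negation chooses (unit vectors)
    return [x - lam * y for x, y in zip(a, b)]

a = normalise([1.0, 2.0, 0.5])  # toy query term vector
b = normalise([0.5, 1.0, 1.0])  # toy negated term vector
q = negate(a, b)

# q is orthogonal to b, and sim(a, a NOT b) = 1 - (a.b)^2.
print(abs(dot(q, b)) < 1e-9)                        # True
print(math.isclose(dot(q, a), 1 - dot(a, b) ** 2))  # True
```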
            {
                "text": "The third gain is that vector retrieval proves to be better at removing not only an unwanted term but also its synonyms and related words (see Section 4), which is clearly desirable if we wish to remove not only a string of characters but the meaning represented by this string.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negation by filtering results after",
                "sec_num": "3.1"
            },
            {
                "text": "Relevance feedback has been shown to improve retrieval (Salton and Buckley, 1990) . In this process, documents judged to be relevant have (some multiple of) their document vector added to the query: documents judged to be non-relevant have (some multiple of) their document vector subtracted from the query, producing a new query according to the formula",
                "cite_spans": [
                    {
                        "start": 55,
                        "end": 81,
                        "text": "(Salton and Buckley, 1990)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
            {
                "text": "Q_{i+1} = \u03b1Q_i + \u03b2 \u2211_{rel} D_i/|D_i| \u2212 \u03b3 \u2211_{nonrel} D_i/|D_i|,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
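The feedback update above can be sketched as follows; this is our own plain-list illustration (not the paper's code), using the \u03b2 = 0.75, \u03b3 = 0.25 values reported by Salton and Buckley (1990).

```python
def feedback(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """One feedback round: Q' = alpha*Q + beta*mean(rel) - gamma*mean(nonrel)."""
    new_q = [alpha * x for x in query]
    for docs, weight in ((relevant, beta), (nonrelevant, -gamma)):
        for d in docs:
            for i, x in enumerate(d):
                new_q[i] += weight * x / len(docs)
    return new_q

# Toy 2-d example: one relevant and one non-relevant document.
print(feedback([1.0, 0.0], [[0.0, 1.0]], [[1.0, 1.0]]))  # [0.75, 0.5]
```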
            {
                "text": "where Q_i is the i-th query vector, D_i is the set of documents returned by Q_i which has been partitioned into relevant and non-relevant subsets, and \u03b1, \u03b2, \u03b3 \u2208 R are constants. Salton and Buckley (1990) report best results using \u03b2 = 0.75 and \u03b3 = 0.25. The positive feedback part of this process has become standard in many search engines, with options such as \"More documents like this\" or \"Similar pages\". The subtraction option (called 'negative relevance feedback') is much rarer. A widely held opinion is that negative feedback is liable to harm retrieval, because it may move the query away from relevant as well as non-relevant documents (Kowalski, 1997, p. 160).",
                "cite_spans": [
                    {
                        "start": 178,
                        "end": 203,
                        "text": "Salton and Buckley (1990)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 649,
                        "end": 673,
                        "text": "(Kowalski, 1997, p. 160)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
            {
                "text": "The concepts behind negative relevance feedback are discussed instructively by Dunlop (1997). Negative relevance feedback introduces the idea of subtracting an unwanted vector from a query, but gives no general method for deciding \"how much to subtract\". We shall refer to such methods as 'Constant Subtraction'. Dunlop (1997, p. 139) gives an analysis which leads to a very intuitive reason for preferring vector negation over constant subtraction. If a user removes an unwanted term which the model deems to be closely related to the desired term, this should have a strong effect, because there is a significant 'difference of opinion' between the user and the model. (From an even more informal point of view, why would anyone take the trouble to remove a meaning that isn't there anyway?) With any kind of constant subtraction, however, the removal of distant points has a greater effect on the final query statement than the removal of nearby points.",
                "cite_spans": [
                    {
                        "start": 79,
                        "end": 92,
                        "text": "Dunlop (1997)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 314,
                        "end": 335,
                        "text": "Dunlop (1997, p. 139)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
            {
                "text": "Vector negation corrects this intuitive mismatch. Recall from Equation 1 that (using normalised vectors for simplicity) the vector a NOT b is given by a \u2212 (a \u2022 b)b. The similarity of a with a NOT b is therefore",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
            {
                "text": "a \u2022 (a \u2212 (a \u2022 b)b) = 1 \u2212 (a \u2022 b)\u00b2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
            {
                "text": "The closer a and b are, the greater the (a \u2022 b)\u00b2 factor becomes, so the similarity of a with a NOT b becomes smaller the closer a is to b. This coincides exactly with Dunlop's intuitive view: removing a concept which in the model is very close to the original query has a large effect on the outcome.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Negative relevance feedback",
                "sec_num": "3.2"
            },
            {
                "text": "This section describes experiments which compare the three methods of negation described above (post-retrieval filtering, constant subtraction and vector negation) with the baseline alternative of no negation at all. The experiments were carried out using the vector space model described in Section 2.1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation and Results",
                "sec_num": "4"
            },
            {
                "text": "To judge the effectiveness of different methods at removing unwanted meanings, with a large number of queries, we made the following assumptions. A document which is relevant to the meaning of 'term a NOT term b' should contain as many references to term a and as few references to term b as possible. Close neighbours and synonyms of term b are undesirable as well, since if they occur the document in question is likely to be related to the negated term even if the negated term itself does not appear.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation and Results",
                "sec_num": "4"
            },
            {
                "text": "1200 queries of the form 'term a NOT term b' were generated for 3 different document collections. The terms chosen were the 100 most frequently occurring (non-stop) words in the collection, 100 mid-frequency words (the 1001st to 1100th most frequent), and 100 low-frequency words (the 5001st to 5100th most frequent). The nearest neighbour (the word with highest cosine similarity) to each positive term was taken to be the negated term. (This assumes that a user is most likely to want to remove a meaning closely related to the positive term: there is no point in removing unrelated information which would not be retrieved anyway.) In addition, for the 100 most frequent words, an extra retrieval task was performed with the roles of the positive term and the negated term reversed, so that in this case the system was being asked to remove the very most common words in the collection from a query generated by their nearest neighbour. We anticipated that this would be an especially difficult task, and a particularly realistic one, simulating a user who is swamped with information about a 'popular topic' in which they are not interested. 1 The document collections used were from the British National Corpus (published by Oxford University, ca. 90M words, 85K documents), the New York Times News Syndicate (1994-96, from the North American News Text Corpus published by the Linguistic Data Consortium, ca. 143M words, 370K documents) and the Ohsumed corpus of medical documents (Hersh et al., 1994) (ca. 40M words, 230K documents).",
                "cite_spans": [
                    {
                        "start": 1165,
                        "end": 1166,
                        "text": "1",
                        "ref_id": null
                    },
                    {
                        "start": 1534,
                        "end": 1553,
                        "text": "(Hersh et al., 1994",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "The 20 documents most relevant to each query were obtained using each of the following four techniques.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 No negation. The query was just the positive term and the negated term was ignored.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Post-retrieval filtering. After vector retrieval using only the positive term as the query term, documents containing the negated term were eliminated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Constant subtraction. Experiments were performed with a variety of subtraction constants. The query a NOT b was thus given the vector a \u2212 \u03bbb for some \u03bb \u2208 [0, 1]. The results recorded in this paper were obtained using \u03bb = 0.75, which gives a direct comparison with vector negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Vector negation, as described in this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "For each set of retrieved documents, the following results were counted.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 The relative frequency of the positive term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 The relative frequency of the negated term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 The relative frequency of the ten nearest neighbours of the negative term. One slight subtlety here is that the positive term was itself a close neighbour of the negated term: to avoid inconsistency, we took as 'negative neighbours' only those which were closer to the negated term than to the positive term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 The relative frequency of the synonyms of the negated term, as given by the WordNet database (Fellbaum, 1998) . As above, words which were also synonyms of the positive term were discounted. On the whole fewer such synonyms were found in the Ohsumed and NYT documents, which have many medical terms and proper names which are not in WordNet.",
                "cite_spans": [
                    {
                        "start": 95,
                        "end": 111,
                        "text": "(Fellbaum, 1998)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "Additional experiments were carried out to compare the effectiveness of different forms of negation at removing several unwanted terms. The same 1200 queries were used as above, and the next nearest neighbour was added as a further negative argument. For two negated terms, the post-retrieval filtering process worked by discarding documents containing either of the negative terms. Constant subtraction worked by subtracting a constant multiple of each of the negated terms from the query. Vector negation worked by making the query vector orthogonal to the plane generated by the two negated terms, as in Equation 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
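This projection can be sketched as follows (the helper names are ours, for illustration): orthonormalise the negated vectors by Gram-Schmidt, then remove the query's components along each of them, leaving the query orthogonal to the whole negated subspace.

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def project_out(v, basis):
    """Subtract from v its components along each orthonormal basis vector."""
    out = list(v)
    for e in basis:
        c = dot(out, e)
        out = [x - c * y for x, y in zip(out, e)]
    return out

def negate_all(a, negated):
    """Project a onto the orthogonal complement of span(negated)."""
    basis = []
    for b in negated:
        w = project_out(b, basis)     # Gram-Schmidt step
        n = math.sqrt(dot(w, w))
        if n > 1e-12:                 # skip linearly dependent vectors
            basis.append([x / n for x in w])
    return project_out(a, basis)

q = negate_all([1.0, 1.0, 1.0], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(q)  # [0.0, 0.0, 1.0]
```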
            {
                "text": "Results were collected in much the same way as the results for single-argument negation. Occurrences of each of the negated terms were added together, as were occurrences of the neighbours and WordNet synonyms of either of the negated words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "The results of our experiments are collected in Table 2 and summarised in Figure 1 . The results for a single negated term demonstrate the following points.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 48,
                        "end": 55,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 74,
                        "end": 82,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 All forms of negation proved extremely good at removing the unwanted words. This is trivially true for post-retrieval filtering, which works by discarding any documents that contain the negated term. It is more interesting that constant subtraction and vector negation performed so well, cutting occurrences of the negated word by 82% and 85% respectively compared with the baseline of no negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 On average, using no negation at all retrieved the most positive terms, though not in every case. While this upholds the claim that any form of negation is likely to remove relevant as well as irrelevant results, the damage done was only around 3% for post-retrieval filtering and 25% for constant and vector negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 These observations alone would suggest that post-retrieval filtering is the best method for the simple goal of maximising occurrences of the positive term while minimising the occurrences of the negated term. However, vector negation and constant subtraction dramatically outperformed post-retrieval filtering at removing neighbours of the negated terms, and were reliably better at removing WordNet synonyms as well. We believe this to be good evidence that, while post-search filtering is by definition better at removing unwanted strings, the vector methods (either orthogonal or constant subtraction) are much better at removing unwanted meanings. Preliminary observations suggest that in the cases where vector negation retrieves fewer occurrences of the positive term than other methods, the other methods are often retrieving documents that are still related in meaning to the negated term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Constant subtraction can give similar results to vector negation on these queries (though the vector negation results are slightly better). This is with queries where the negated term is the closest neighbour of the positive term, and the assumption that the similarity between these pairs is around 0.75 is a reasonable approximation. However, further experiments with a variety of negated arguments chosen at random from a list of neighbours demonstrated that in this more general setting, the flexibility provided by vector negation produced conclusively better results than constant subtraction for any single fixed constant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "In addition, the results for removing multiple negated terms demonstrate the following points.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Removing another negated term further reduces the retrieval of the positive term for all forms of negation. Constant subtraction is the worst affected, performing noticeably worse than vector negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 All three forms of negation still remove many occurrences of the negated term. Vector negation and (trivially) post-search filtering perform as well as they do with a single negated term. However, constant subtraction performs much worse, retrieving more than twice as many unwanted terms as vector negation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Post-retrieval filtering was even less effective at removing neighbours of the negated term than with a single negated term. Constant subtraction also performed much less well. Vector negation was by far the best method for removing negative neighbours. The same pattern as in Table 2 holds for WordNet synonyms, though the results are less pronounced.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 256,
                        "end": 263,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "This shows that vector negation is capable of removing unwanted terms and their related words from retrieval results, while retaining more occurrences of the original query term than constant subtraction. Vector negation does much better than other methods at removing neighbours and synonyms, and we therefore expect that it is better at removing documents referring to unwanted meanings of ambiguous words. Experiments with sense-tagged data are planned to test this hypothesis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "The goal of these experiments was to evaluate the extent to which the different methods could remove unwanted meanings, which we measured by counting the frequency of unwanted terms and concepts in retrieved documents. This leaves the problems of determining the optimal scope for the negation quantifier for an IR system, and of developing a natural user interface for this process for complex queries. These important challenges are beyond the scope of this paper, but would need to be addressed to incorporate vector negation into a state-of-the-art IR system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Queries and results for negating single",
                "sec_num": "4.1"
            },
            {
                "text": "Traditional branches of science have exploited the structure inherent in vector spaces and developed rigorous techniques which could contribute to natural language processing. As an example of this potential fertility, we have adapted the negation and disjunction connectives used in quantum logic to the tasks of word-sense discrimination and information retrieval.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "Experiments focussing on the use of vector negation to remove individual and multiple terms from queries have shown that this is a powerful and efficient tool for removing both unwanted terms and their related meanings from retrieved documents.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "Because it associates a unique vector to each query statement involving negation, the similarity between each document and the query can be calculated using just one scalar product computation, a considerable gain in efficiency over methods which involve some form of post-retrieval filtering.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "We hope that these preliminary results will be initial steps in developing a concrete and effective system for learning, representing and composing aspects of lexical meaning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "An interactive demonstration of negation for word similarity and document retrieval is publicly available at http://infomap.stanford.edu/webdemo.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Demonstration",
                "sec_num": null
            },
            {
                "text": "For reasons of space we do not show the retrieval performance on query terms of different frequencies in this paper, though more detailed results are available from the author on request.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Modern Information Retrieval",
                "authors": [
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Baeza-Yates",
                        "suffix": ""
                    },
                    {
                        "first": "Berthier",
                        "middle": [],
                        "last": "Ribiero-Neto",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ricardo Baeza-Yates and Berthier Ribiero-Neto. 1999. Modern Information Retrieval. Addison Wesley / ACM Press.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The logic of quantum mechanics",
                "authors": [
                    {
                        "first": "Garrett",
                        "middle": [],
                        "last": "Birkhoff",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "von Neumann",
                        "suffix": ""
                    }
                ],
                "year": 1936,
                "venue": "Annals of Mathematics",
                "volume": "37",
                "issue": "",
                "pages": "823--843",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Garrett Birkhoff and John von Neumann. 1936. The logic of quantum mechanics. Annals of Mathemat- ics, 37:823-843.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The effect of accessing nonmatching documents on relevance feedback",
                "authors": [
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Dunlop",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "ACM Transactions on Information Systems",
                "volume": "15",
                "issue": "2",
                "pages": "137--153",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mark Dunlop. 1997. The effect of accessing non- matching documents on relevance feedback. ACM Transactions on Information Systems, 15(2):137- 153, April.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "WordNet: An Electronic Lexical Database",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cam- bridge MA.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Ohsumed: An interactive retrieval evaluation and new large test collection for research",
                "authors": [
                    {
                        "first": "William",
                        "middle": [],
                        "last": "Hersh",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Buckley",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "J"
                        ],
                        "last": "Leone",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Hickam",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the 17th Annual ACM SIGIR Conference",
                "volume": "",
                "issue": "",
                "pages": "192--201",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "William Hersh, Chris Buckley, T. J. Leone, and David Hickam. 1994. Ohsumed: An interactive retrieval evaluation and new large test collection for research. In Proceedings of the 17th Annual ACM SIGIR Conference, pages 192-201.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Information retrieval systems: theory and implementation. Kluwer academic publishers",
                "authors": [
                    {
                        "first": "Gerald",
                        "middle": [],
                        "last": "Kowalski",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gerald Kowalski. 1997. Information retrieval sys- tems: theory and implementation. Kluwer aca- demic publishers, Norwell, MA.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A solution to plato's problem: The latent semantic analysis theory of acquisition",
                "authors": [
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Landauer",
                        "suffix": ""
                    },
                    {
                        "first": "Susan",
                        "middle": [],
                        "last": "Dumais",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Psychological Review",
                "volume": "104",
                "issue": "2",
                "pages": "211--240",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thomas Landauer and Susan Dumais. 1997. A solu- tion to plato's problem: The latent semantic anal- ysis theory of acquisition. Psychological Review, 104(2):211-240.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Improving retrieval performance by relevance feedback",
                "authors": [
                    {
                        "first": "Gerard",
                        "middle": [],
                        "last": "Salton",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Buckley",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Journal of the American society for information science",
                "volume": "41",
                "issue": "4",
                "pages": "288--297",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gerard Salton and Chris Buckley. 1990. Improv- ing retrieval performance by relevance feedback. Journal of the American society for information science, 41(4):288-297.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Introduction to modern information retrieval",
                "authors": [
                    {
                        "first": "Gerard",
                        "middle": [],
                        "last": "Salton",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Mcgill",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gerard Salton and Michael McGill. 1983. Introduc- tion to modern information retrieval. McGraw- Hill, New York, NY.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Extended boolean information retrieval",
                "authors": [
                    {
                        "first": "Gerard",
                        "middle": [],
                        "last": "Salton",
                        "suffix": ""
                    },
                    {
                        "first": "Edward",
                        "middle": [
                            "A"
                        ],
                        "last": "Fox",
                        "suffix": ""
                    },
                    {
                        "first": "Harry",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "Communications of the ACM",
                "volume": "26",
                "issue": "11",
                "pages": "1022--1036",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gerard Salton, Edward A. Fox, and Harry Wu. 1983. Extended boolean information retrieval. Commu- nications of the ACM, 26(11):1022-1036, Novem- ber.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Automatic word sense discrimination",
                "authors": [
                    {
                        "first": "Hinrich",
                        "middle": [],
                        "last": "Sch\u00fctze",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Computational Linguistics",
                "volume": "24",
                "issue": "1",
                "pages": "97--124",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense dis- crimination. Computational Linguistics, 24(1):97- 124.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Inference networks for document retrieval",
                "authors": [
                    {
                        "first": "Howard",
                        "middle": [],
                        "last": "Turtle",
                        "suffix": ""
                    },
                    {
                        "first": "W. Bruce",
                        "middle": [],
                        "last": "Croft",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Proceedings of the 13th Annual ACM SIGIR Conference",
                "volume": "",
                "issue": "",
                "pages": "1--24",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Howard Turtle and W. Bruce Croft. 1989. Inference networks for document retrieval. In Proceedings of the 13th Annual ACM SIGIR Conference, pages 1-24.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Word vectors and quantum logic",
                "authors": [
                    {
                        "first": "Dominic",
                        "middle": [],
                        "last": "Widdows",
                        "suffix": ""
                    },
                    {
                        "first": "Stanley",
                        "middle": [],
                        "last": "Peters",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Mathematics of Language",
                "volume": "8",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dominic Widdows and Stanley Peters. 2003. Word vectors and quantum logic. In Mathematics of Language 8, Bloomington, Indiana.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Unsupervised methods for developing taxonomies by combining syntactic and statistical information",
                "authors": [
                    {
                        "first": "Dominic",
                        "middle": [],
                        "last": "Widdows",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "HLT-NAACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dominic Widdows. 2003. Unsupervised methods for developing taxonomies by combining syntactic and statistical information. HLT-NAACL, Edmonton, Canada.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "Barcharts summarising results of"
            },
            "TABREF0": {
                "type_str": "table",
                "text": ").",
                "html": null,
                "content": "<table><tr><td colspan=\"2\">suit</td><td colspan=\"2\">suit NOT lawsuit</td></tr><tr><td>suit</td><td>1.000000</td><td>pants</td><td>0.810573</td></tr><tr><td>lawsuit</td><td>0.868791</td><td>shirt</td><td>0.807780</td></tr><tr><td>suits</td><td>0.807798</td><td>jacket</td><td>0.795674</td></tr><tr><td>plaintiff</td><td>0.717156</td><td>silk</td><td>0.781623</td></tr><tr><td>sued</td><td>0.706158</td><td>dress</td><td>0.778841</td></tr><tr><td>plaintiffs</td><td>0.697506</td><td>trousers</td><td>0.771312</td></tr><tr><td>suing</td><td>0.674661</td><td>sweater</td><td>0.765677</td></tr><tr><td>lawsuits</td><td>0.664649</td><td>wearing</td><td>0.764283</td></tr><tr><td>damages</td><td>0.660513</td><td>satin</td><td>0.761530</td></tr><tr><td>filed</td><td>0.655072</td><td>plaid</td><td>0.755880</td></tr><tr><td>behalf</td><td>0.650374</td><td>lace</td><td>0.755510</td></tr><tr><td>appeal</td><td>0.608732</td><td>worn</td><td>0.755260</td></tr><tr><td colspan=\"4\">Terms related to 'suit NOT lawsuit' (NYT data)</td></tr><tr><td>play</td><td/><td colspan=\"2\">play NOT game</td></tr><tr><td>play</td><td>1.000000</td><td>play</td><td>0.779183</td></tr><tr><td>playing</td><td>0.773676</td><td>playing</td><td>0.658680</td></tr><tr><td>plays</td><td>0.699858</td><td>role</td><td>0.594148</td></tr><tr><td>played</td><td>0.684860</td><td>plays</td><td>0.581623</td></tr><tr><td>game</td><td>0.626796</td><td>versatility</td><td>0.485053</td></tr><tr><td>offensively</td><td>0.597609</td><td>played</td><td>0.479669</td></tr><tr><td>defensively</td><td>0.546795</td><td>roles</td><td>0.470640</td></tr><tr><td>preseason</td><td>0.544166</td><td>solos</td><td>0.448625</td></tr><tr><td>midfield</td><td>0.540720</td><td>lalas</td><td>0.442326</td></tr><tr><td>role</td><td>0.535318</td><td>onstage</td><td>0.438302</td></tr><tr><td>tempo</td><td>0.504522</td><td>piano</td><td>0.438175</td></tr><tr><td>score</td><td>0.475698</td><td>tyrone</td><td>0.437917</td></tr><tr><td colspan=\"4\">Terms related to 'play NOT game' (NYT data)</td></tr></table>",
                "num": null
            },
            "TABREF1": {
                "type_str": "table",
                "text": "First experiments with negation and wordsenses",
                "html": null,
                "content": "<table/>",
                "num": null
            },
            "TABREF3": {
                "type_str": "table",
                "text": "Table of results showing the percentage frequency of different terms in retrieved documents",
                "html": null,
                "content": "<table><tr><td colspan=\"4\">Average results across corpora for one negated term (y-axis: % frequency, 0 to 1)</td></tr><tr><td>No negation</td><td>Post-retrieval filtering</td><td>Constant Subtraction</td><td>Vector negation</td></tr><tr><td colspan=\"4\">Average results across corpora for two negated terms (y-axis: % frequency, 0 to 1)</td></tr><tr><td>No negation</td><td>Post-retrieval filtering</td><td>Constant Subtraction</td><td>Vector negation</td></tr><tr><td colspan=\"4\">Legend: Positive Term; Negated Term; Vector Neighbours of Negated Word; WordNet Synonyms of Negated Word</td></tr></table>",
                "num": null
            }
        }
    }
}