File size: 81,103 Bytes
6fa4bc9
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
1273
1274
1275
1276
1277
1278
1279
1280
1281
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
1342
1343
1344
1345
1346
1347
1348
1349
1350
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
1363
1364
1365
1366
1367
1368
1369
1370
1371
1372
1373
1374
1375
1376
1377
1378
1379
1380
1381
1382
1383
1384
1385
1386
1387
1388
1389
1390
1391
1392
1393
1394
1395
1396
1397
1398
1399
1400
1401
1402
1403
1404
1405
1406
1407
1408
1409
1410
1411
1412
1413
1414
1415
1416
1417
1418
1419
1420
1421
1422
1423
1424
1425
1426
1427
1428
1429
1430
1431
1432
1433
1434
1435
1436
1437
1438
1439
1440
1441
1442
1443
1444
1445
1446
1447
1448
1449
1450
1451
1452
1453
1454
1455
1456
1457
1458
1459
1460
1461
1462
1463
1464
1465
1466
1467
1468
1469
1470
1471
1472
1473
1474
1475
1476
1477
1478
1479
1480
1481
1482
1483
1484
1485
1486
1487
1488
1489
1490
1491
1492
1493
1494
1495
1496
1497
1498
1499
1500
1501
1502
1503
1504
1505
1506
1507
1508
1509
1510
1511
1512
1513
1514
1515
1516
1517
1518
1519
1520
1521
1522
1523
1524
1525
1526
1527
1528
1529
1530
1531
1532
1533
1534
1535
1536
1537
1538
1539
1540
1541
1542
1543
1544
1545
1546
1547
1548
1549
1550
1551
1552
{
    "paper_id": "U07-1007",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:08:52.914208Z"
    },
    "title": "Exploring approaches to discriminating among near-synonyms",
    "authors": [
        {
            "first": "Mary",
            "middle": [],
            "last": "Gardiner",
            "suffix": "",
            "affiliation": {
                "laboratory": "Centre for Language Technology Macquarie University",
                "institution": "",
                "location": {}
            },
            "email": "gardiner@ics.mq.edu.au"
        },
        {
            "first": "Mark",
            "middle": [],
            "last": "Dras",
            "suffix": "",
            "affiliation": {
                "laboratory": "Centre for Language Technology Macquarie University",
                "institution": "",
                "location": {}
            },
            "email": "madras@ics.mq.edu.au"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Near-synonyms are words that mean approximately the same thing, and which tend to be assigned to the same leaf in ontologies such as WordNet. However, they can differ from each other subtly in both meaning and usage-consider the pair of nearsynonyms frugal and stingy-and therefore choosing the appropriate near-synonym for a given context is not a trivial problem.",
    "pdf_parse": {
        "paper_id": "U07-1007",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Near-synonyms are words that mean approximately the same thing, and which tend to be assigned to the same leaf in ontologies such as WordNet. However, they can differ from each other subtly in both meaning and usage-consider the pair of nearsynonyms frugal and stingy-and therefore choosing the appropriate near-synonym for a given context is not a trivial problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Initial work by Edmonds (1997) suggested that corpus statistics methods would not be particularly effective, and led to subsequent work adopting methods based on specific lexical resources. In earlier work (Gardiner and Dras, 2007) we discussed the hypothesis that some kind of corpus statistics approach may still be effective in some situations, particularly if the near-synonyms differ in sentiment from each other, and we presented some preliminary confirmation of the truth of this hypothesis. This suggests that problems involving this type of nearsynonym may be particularly amenable to corpus statistics methods.",
                "cite_spans": [
                    {
                        "start": 16,
                        "end": 30,
                        "text": "Edmonds (1997)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 206,
                        "end": 231,
                        "text": "(Gardiner and Dras, 2007)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In this paper we investigate whether this result extends to a different corpus statistics method and in addition we analyse the results with respect to a possible confounding factor discussed in the previous work: the skewness of the sets of near synonyms. Our results show that the relationship between success in prediction and the nature of the near-synonyms is method dependent and that skewness is a more significant factor.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Choosing an appropriate word or phrase from among candidate near-synonyms or paraphrases is a significant language generation problem since even though near-synonyms and paraphrases are close in meaning, they differ in connotation and denotation in ways that may be significant to the desired effect of the generation output: for example, word choice can change a sentence from advice to admonishment. Particular applications that have been cited as having a use for modules which make effective word and phrase choices among closely related options are summarisation and rewriting (Barzilay and Lee, 2003) . Inkpen and Hirst (2006) extended the generation system HALogen (Langkilde and Knight, 1998; Langkilde, 2000) to include such a module.",
                "cite_spans": [
                    {
                        "start": 582,
                        "end": 606,
                        "text": "(Barzilay and Lee, 2003)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 609,
                        "end": 632,
                        "text": "Inkpen and Hirst (2006)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 672,
                        "end": 700,
                        "text": "(Langkilde and Knight, 1998;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 701,
                        "end": 717,
                        "text": "Langkilde, 2000)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We discuss a particular aspect of choice between closely related words and phrases: choice between words when there is any difference in meaning or attitude. Typical examples are frugal and stingy; slender and skinny; and error and blunder.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, as in Gardiner and Dras (2007) , we explore whether corpus statistics methods have promise in discriminating between near-synonyms with attitude differences, particularly compared to near-synonyms that do not differ in attitude. In our work, we used the work of (Edmonds, 1997) , the first to attempt to distinguish among near-synonyms, adopting a corpus statistics approach. Based on that work, we found that there was a significant difference in attitudinal versus non-attitudinal nearsynonyms. However, the Edmonds algorithm produced on the whole poor results, only a little above the given baseline, if at all. According to (Inkpen, 2007) , the poor results were due to the way the al-gorithm handled data sparseness; she consequently presented an alternative algorithm with much better results. We also found that attitudinal versus non-attitudinal near-synonyms differed significantly in their baselines as a consequence of skewness of synset distribution, complicating analysis.",
                "cite_spans": [
                    {
                        "start": 21,
                        "end": 45,
                        "text": "Gardiner and Dras (2007)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 277,
                        "end": 292,
                        "text": "(Edmonds, 1997)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 643,
                        "end": 657,
                        "text": "(Inkpen, 2007)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper then we develop an algorithm based on that of Inkpen, and use a far larger data set and a methodology suited to large data sets, to see whether this alternative method will support our previous findings. In addition we analyse results with regard to a measure of synset skewness. In Section 2 we outline the near-synonym task description; in Section 3 we present our method based on Inkpen; in Section 5 we present out method based on Inkpen, and our experimental method using it; in Section 4 we evaluate its effectiveness in comparison with Inkpen's own method; in Section 5 we test our hypothesis, present our results and discuss them; and in Section 6 we conclude.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our experiment tests a system's ability to fill a gap in a sentence from a given set of near-synonyms. This problem was first described by Edmonds (1997) . Edmonds describes an experiment that he designed to test whether or not co-occurrence statistics are sufficient to predict which word in a set of nearsynonyms fills a lexical gap. He gives this example of asking the system to choose which of error, mistake or oversight fits into the gap in this sentence:",
                "cite_spans": [
                    {
                        "start": 139,
                        "end": 153,
                        "text": "Edmonds (1997)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "(1) However, such a move also of cutting deeply into U.S. economic growth, which is why some economists think it would be a big .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "Performance on the task is measured by comparing system performance against real word choices: that is, sentences such as example 1 are drawn from real text, a word is removed, and the system is asked to choose between that word and all of its nearsynonyms as candidates to fill the gap.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "3 An approximation to Inkpen's solution to the near-synonym choice problem",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "We know of two descriptions of algorithms used to choose between near-synonyms based upon con-text: that described by Edmonds (1997) and that described by Inkpen (2007) . In our previous work we used Edmonds' method for discriminating between near-synonyms as a basis for comparing whether near-synonyms that differ in attitude in predictability from near-synonyms that do not. The more recent work by Inkpen is a more robust and reliable approach to the same problem, and therefore in this paper we develop a methodology based closely on that of Inkpen, using a different style of training corpus, in order to test whether the differences between the performance of nearsynonyms that differ in sentiment and those that do not persists on the better performing method.",
                "cite_spans": [
                    {
                        "start": 118,
                        "end": 132,
                        "text": "Edmonds (1997)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 155,
                        "end": 168,
                        "text": "Inkpen (2007)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "Edmonds' and Inkpen's approaches to nearsynonym prediction have the same underlying hypothesis: that the choice between near-synonyms can be predicted to an extent from the words immediately surrounding the gap. Returning to example 1, their approaches use words around the gap, eg big, to predict which of error, mistake or oversight would be used. They do this using some measure of how often big, and other words surrounding the gap, is used in contexts where each of error, mistake and oversight are used. Edmonds uses every word in the sentence containing the gap, whereas Inkpen uses a generally smaller window of words surrounding the gap.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "In this section we briefly describe Edmonds' approach to discriminating between near-synonyms in Section 3.1 and describe Inkpen's approach in more detail in Section 3.2. We then describe our adaptation of Inkpen's approach in Section 3.3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Description",
                "sec_num": "2"
            },
            {
                "text": "In Edmonds' approach to the word choice problem, the suitability of any candidate word c for a sentence S can be approximated as a score(c, S) of suitability, and where score(c,S) is a sum of the association between the candidate c and every other word w in the sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Edmonds' approach",
                "sec_num": "3.1"
            },
            {
                "text": "score(c, S) = w\u2208S sig(c, w) (2)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Edmonds' approach",
                "sec_num": "3.1"
            },
            {
                "text": "In Edmonds' original method, which we used in Gardiner and Dras (2007) , sig(c, w) is computed using either the t-score of c and w or a second degree association: a combination of the t-scores of c with a word w 0 and the same word w 0 with w. Edmonds' t-scores were computed using co-occurrence counts in the 1989 Wall Street Journal, and the performance did not improve greatly over a baseline of choosing the most frequent word in the synset to fill all gaps.",
                "cite_spans": [
                    {
                        "start": 46,
                        "end": 70,
                        "text": "Gardiner and Dras (2007)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Edmonds' approach",
                "sec_num": "3.1"
            },
            {
                "text": "In Inkpen's method, the suitability of candidate c for a given gap is approximated slightly differently: the entire sentence is not used to measure the suitability of the word. Instead, a certain sized window of k words either side of the gap is used. For example, if k = 3, the word missing from the sentence in example 3 is predicted using only the six words shown in example 4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "( 3)Visitors to Istanbul often sense a second, layer beneath the city's tangible beauty. 4sense a second, layer beneath the Given a text fragment f consisting of 2k words, k words either side of a gap g",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "(w 1 , w 2 , . . . , w k , g, w k+1 , . . . , w 2k ),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "the suitability s(c, g) of any given candidate word c to fill the gap g is given by:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "s(c, g) = k j=1 PMI(c, w j ) + 2k j=k+1 PMI(w j , c) (5) PMI(x, y)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "is the pointwise mutual information score of two words x and y, and is given by (Church and Hanks, 1991) :",
                "cite_spans": [
                    {
                        "start": 80,
                        "end": 104,
                        "text": "(Church and Hanks, 1991)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "PMI(x, y) = log 2 C(x, y) \u2022 N C(x) \u2022 C(y) (6)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "C(x), C(y) and C(x, y) are estimated using token counts in a corpus: C(x, y) is the number of times that x and y are found together, C(x) is the total number of occurrences of x in the corpus and C(y) the total number of occurrences of y in the corpus. N is the total number of words in the text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "Inkpen estimated C(x), C(y) and C(x, y) by issuing queries to the Waterloo MultiText System (Clarke and Terra, 2003) . She defined C(x, y) the number of times where x is followed by y within a certain query frame of length q within a corpus, so that, for example, if q = 3, example 7 would count as a co-occurrence of fresh and mango, but example 8 would not: 7He likes fresh cold mango.",
                "cite_spans": [
                    {
                        "start": 92,
                        "end": 116,
                        "text": "(Clarke and Terra, 2003)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "(8) I like fresh fruits in general, particularly mango.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "She also experimented with document counts where C(x) is the number of documents that x is found in and C(x, y) is the number of documents in which both x and y are found, called PMI-IR (Turney, 2001); but found that this method did not perform as well, although the difference was not statistically significant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "Inkpen's method outperformed both the baseline and Edmonds' method by 22 and 10 percentage points respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inkpen's approach",
                "sec_num": "3.2"
            },
            {
                "text": "Our variation on Inkpen's approach is designed to estimate PMI(x, y), the pointwise mutual information of words x and y, using the Web 1T 5-gram corpus Version 1 (Brants and Franz, 2006) .",
                "cite_spans": [
                    {
                        "start": 162,
                        "end": 186,
                        "text": "(Brants and Franz, 2006)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "Web 1T contains n-gram frequency counts, up to and including 5-grams, as they occur in a trillion words of World Wide Web text. There is no context information beyond the n-gram boundaries. Examples of a 3-gram and a 5-gram and their respective counts from Web 1T are shown in examples 9 and 10: (9) means official and 41",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "(10) Valley National Park 1948 Art 51",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "These n-gram counts allow us to estimate C(x, y) for a given window width k by summing the Web 1T counts of k-grams in which words x and y occur and x is followed by y.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "Counts are computed using a an especially developed version of the Web 1T processing software \"Get 1T\" 1 originally described in Hawker (2007) and detailed in Hawker et. al (2007) . The Get 1T software allows n-gram queries of the form in the following examples, where < * > is a wildcard which will match any token in that place in the n-gram. In order to find the number of n-grams with fresh and mango we need to construct three queries:",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 142,
                        "text": "Hawker (2007)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 159,
                        "end": 179,
                        "text": "Hawker et. al (2007)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "(11) < * > fresh mango (12) fresh < * > mango (13) fresh mango < * >",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "However, in order to find fresh and mango within 4 grams we need multiple wildcards as in example 14, and added the embedded query hashing functionality described in Hawker et. al (2007) . 14fresh < * > < * > mango",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 186,
                        "text": "Hawker et. al (2007)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "Queries are matched case-insensitively, but no stemming takes place, and there is no deeper analysis (such as part of speech matching).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "This gives us the following methodology for a given lexical gap g and a window of k words either side of the gap:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "1. for every candidate near-synonym c:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "(a) for every word w i in the set of words proceeding the gap, w 1 , . . . , w k , calculate PMI(w i , c) as in equation 6, given counts for C(w i ), C(c) and C(w i , c) from Web 1T 2 (b) for every word w j in the set of words following the gap, w k+1 , . . . , w 2k , calculate PMI(c, w j ) as in equation 6, given counts for C(c), C(w j ) and C(c, w j ) from Web 1T (c) compute the suitability score s(c, g) of candidate c as given by equation 5 2. select the candidate near-synonym with the highest suitability score for the gap where a single such candidate exists 3. where there is no single candidate with a highest suitability score, select the most frequent candidate for the gap (that is, fall back to the baseline described in Section 3.4) 3",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "Since Web 1T contains 5-gram counts, we can use query frame sizes from q = 1 (words x and y must be adjacent, that is, occur in the 2-gram counts) to q = 4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our variation of Inkpen's approach",
                "sec_num": "3.3"
            },
            {
                "text": "The baseline method that our method is compared to uses the most frequent word from a given synset as the chosen candidate for any gap requiring a member of that synset. Frequency is measured using frequency counts of the combined part of speech and word token in the 1989 Wall Street Journal.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baseline method",
                "sec_num": "3.4"
            },
            {
                "text": "Inkpen's method",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Effectiveness of the approximation to",
                "sec_num": "4"
            },
            {
                "text": "In this section we compare our approximation of Inkpen's method described in Section 3.3 with her method described in Section 3.2. This will allow us to determine whether our approximation is effective enough to allow us to compare attitudinal and nonattitudinal near-synonyms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Effectiveness of the approximation to",
                "sec_num": "4"
            },
            {
                "text": "In order to compare the two methods, we use five sets of near-synonyms, also used as test sets by both Edmonds and Inkpen:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 the adjectives difficult, hard and tough;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 the nouns error, mistake and oversight;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 the nouns job, task and duty;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 the nouns responsibility, burden, obligation and commitment; and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 the nouns material, stuff and substance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "Inkpen compared her method to Edmonds' using these five sets and two more, both sets of verbs, which we have not tested on, as our attitudinal and non-attitudinal data does not included annotated verbs. We are therefore interested in the predictive power of our method compared to Inkpen's and Edmond's on adjectives and nouns.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test sets",
                "sec_num": "4.1"
            },
            {
                "text": "We performed this experiment, as Edmonds and Inkpen did, using the 1987 Wall Street Journal as a source of test sentences. 4 Where ever one of the words in a test set is found, it is removed from the context in which it occurs to generate a gap for the algorithm to fill.",
                "cite_spans": [
                    {
                        "start": 123,
                        "end": 124,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test contexts",
                "sec_num": "4.2"
            },
            {
                "text": "So, for example, when sentence 15 is found in the test data, the word error is removed from it and the system is asked to predict which of error, mistake or oversight fills the gap at 16: (15) . . .his adversarys' characterization of that minor sideshow as somehow a colossal error on the order of a World War. . ..",
                "cite_spans": [
                    {
                        "start": 188,
                        "end": 192,
                        "text": "(15)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test contexts",
                "sec_num": "4.2"
            },
            {
                "text": "(16) a colossal on the",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test contexts",
                "sec_num": "4.2"
            },
            {
                "text": "Recall from Section 3.2 these two parameters used by Inkpen: k and q.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "4.3"
            },
            {
                "text": "Parameter k is the size of the 'window' of context on either side of a lexical gap in the test set: the k words on either side of a gap are used to predict which of the candidate words best fills the gap.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "4.3"
            },
            {
                "text": "Parameter q is the query size used when querying the corpus to find out how often words x and y occur together in order to compute the value of C(x, y). In order to be counted as occurring together, x and y must occur within a window of length at most q.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "4.3"
            },
            {
                "text": "Inkpen found, using Edmonds' near-synonym set difficult and hard as a development set, that results are best for a small window (k \u2208 {1, 2}) but that the query frame had to be somewhat longer to get the best results. Her results were reported using k = 2 and q = 5, chosen via tuning on the development set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "4.3"
            },
            {
                "text": "We have retained the setting k = 2 and explored results where q = 2 and q = 4: due to Web 1T containing 5-grams but no higher order n-grams we cannot measure the frequency of two words occurring together with any more than three intervening words, so q = 4 is the highest value q can have. Table 1 shows the performance of Edmonds' method and Inkpen's method as given in Inkpen (2007) 5 and our modified method on each of the test sets described in Section 4.1. Note that Inkpen reports different baseline results from us-we have not been able to reproduce her baselines. This may be due to choosing different part of speech tags: we simply used JJ for adjectives and NN for nouns.",
                "cite_spans": [
                    {
                        "start": 371,
                        "end": 384,
                        "text": "Inkpen (2007)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 385,
                        "end": 386,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 290,
                        "end": 297,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "4.3"
            },
            {
                "text": "Inkpen's improvements for the test synsets given in Section 4.1 were between +3.2% and 30.6%. Our performance is roughly comparable, with improvements as high as 31.2%. Further, we tend to improve especially largely over the baseline where Inkpen also does so: on the two sets error etc and responsibility etc..",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
            {
                "text": "The major anomaly when compared to Inkpen's performance is the set job, task and duty, where our method performs very badly compared to both Edmonds' and Inkpen's methods and the baseline (which perform similarly). We also perform under both methods on material, stuff and substance, although not as dramatically.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
            {
                "text": "Overall, the fact that we tend to improve over Edmonds where Inkpen also does so suggests that our algorithm based on Inkpen's takes advantage of the same aspects as hers to gain improvements over Edmonds, and thus that the method is a good candidate for use in our main experiment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
            {
                "text": "Having determined in Section 4 that our modified version of Inkpen's method performs as a passable approximation of hers, and particularly that where her method improved dramatically over the baseline and Edmonds' method that ours improves likewise, we then tested our central hypothesis: that attitudinal synsets respond better to statistical prediction techniques than non-attitudinal synsets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparing attitudinal and non-attitudinal synsets",
                "sec_num": "5"
            },
            {
                "text": "In order to test our hypothesis, we use synsets divided into near-synonym sets that differ in attitudinal and sets that do not. This test set is drawn from our set of annotated attitudinal and non-attitudinal near-synonyms described in Gardiner and Dras (2007) . These are WordNet2.0 (Fellbaum, 1998) Street Journal. The synsets were annotated as attitudinal and non-attitudinal by the authors of this paper. Synsets were chosen where both annotators are certain of their label, and where both annotators have the same label. This results in 60 synsets in total: 8 where the annotators agreed that there was definitely an attitude difference between words in the synset, and 52 where the annotators agreed that there were definitely not attitude differences between the words in the synset. An example of a synset agreed to have attitudinal differences was:",
                "cite_spans": [
                    {
                        "start": 236,
                        "end": 260,
                        "text": "Gardiner and Dras (2007)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 284,
                        "end": 300,
                        "text": "(Fellbaum, 1998)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "(17)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "bad, insecure, risky, high-risk, speculative",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "An example of synsets agreed to not have attitudinal differences was:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "(18)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "sphere, domain, area, orbit, field, arena",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "The synsets are not used in their entirety, due to the differences in the number of words in each synset (compare {violence, force} with two members to {arduous, backbreaking, grueling, gruelling, hard, heavy, laborious, punishing, toilsome} with nine, for example). Instead, a certain number n of words are selected from each synset (where n \u2208 {3, 4}) based on the frequency count in the 1989 Wall Street Journal corpus. For example hard, heavy, gruelling and punishing are the four most frequent words in the {arduous, backbreaking, grueling, gruelling, hard, heavy, laborious, punishing, toilsome} synset, so when n = 4 those four words would be selected. When the synset's length is less than or equal to n, for example when n = 4 but the synset is {violence, force}, the entire synset is used.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "These test sets are referred to as top3 (synsets reduced to 3 or less members) and top4 (synsets reduced to 4 or less members).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test set",
                "sec_num": "5.1"
            },
            {
                "text": "Exactly as in Section 4.2, our lexical gaps and their surrounding contexts are drawn from sentences in the 1987 Wall Street Journal containing one of the words in the test synsets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Test contexts",
                "sec_num": "5.2"
            },
            {
                "text": "As described in Sections 3.2 and 4.3, there are two parameters that can be varied regarding the context around a lexical gap (k), and the nearness of two words x and y in the corpus in order for them to be considered to occur together (q).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "5.3"
            },
            {
                "text": "As per Inkpen's results on her development set, and as in Section 4 we use the setting k = 2 and vary q such that q = 2 on some test runs and q = 4 on others. We cannot test with Inkpen's suggested q = 5, as that would require 6-grams.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter settings",
                "sec_num": "5.3"
            },
            {
                "text": "The overall performance of our method on our sets of attitudinal and non-attitudinal near-synonyms is shown in Table 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 111,
                        "end": 118,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "5.4"
            },
            {
                "text": "We did four test runs in total, two each on sets top3 and top4 varying q between q = 2 and q = 4. The baseline result does not depend on q and therefore is the same for both tests of top3 and of top4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "5.4"
            },
            {
                "text": "Baseline correctness (%) q Our method's correctness (%) Synsets Att. Table 3 : Distribution of improvements on baseline for top3, k = 2, q = 2",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 69,
                        "end": 76,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Contexts containing a test word",
                "sec_num": null
            },
            {
                "text": "As in our previous paper (Gardiner and Dras, 2007) , the baselines behave noticeably differently for attitudinal and non-attitudinal synsets. Calculating the z-statistic as is standard for comparing two proportions (Moore and McCabe, 2003) we find that the difference between the pair of attitudinal and non-attitudinal results for each test are all statistically significant (p < 0.01). Thus, again, it is difficult from the data in Table 2 alone to determine whether the better performance of non-attitudinal synsets is due to the higher baseline performance for those same synsets.",
                "cite_spans": [
                    {
                        "start": 25,
                        "end": 50,
                        "text": "(Gardiner and Dras, 2007)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 215,
                        "end": 239,
                        "text": "(Moore and McCabe, 2003)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 434,
                        "end": 441,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Contexts containing a test word",
                "sec_num": null
            },
            {
                "text": "There are two major aspects of this result requiring further investigation. The first is that our method performs very similarly to the baseline according to these aggregate numbers, which wasn't anticipated based on the results in Section 4, which showed that on a limited set of synsets our method performed well above the baseline, although not as well as Inkpen's original method.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Contexts containing a test word",
                "sec_num": null
            },
            {
                "text": "Secondly, inspection of individual synsets and their performance reveals that this aggregate is not representative of the performance as a whole: it is simply an average of approximately equal numbers of good and bad predictions by our method. Table 3 shows that for one test run (top3, k = 2, q = 2) there were a number of synsets on which our method performed very well with an improvement of more than 20 percentage points over the baseline but also a substantial number where it performed very badly, losing more than 20 percentage points from the baseline.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 244,
                        "end": 251,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Contexts containing a test word",
                "sec_num": null
            },
            {
                "text": "In our previous work we expressed a suspicion that the success of Edmonds' prediction method might be being influenced by the evenness of distribution of frequencies within a synset. That is, if a synset contains a very dominant member (which will cause the baseline to perform well) then the Edmonds method may perform worse against the baseline than it would for a synset in which the word choices were distributed fairly evenly among the members of the set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Contexts containing a test word",
                "sec_num": null
            },
            {
                "text": "Given the results of the test runs shown in Table 2 , and the wide distribution of prediction successes shown in Table 3 , we decided to test this hypothesis that the distribution of words in the synsets influence the performance of prediction methods that use context. This is described in Section 5.4.1.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 44,
                        "end": 51,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 113,
                        "end": 120,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Contexts containing a test word",
                "sec_num": null
            },
            {
                "text": "In this section, we describe an analysis of the results in Section 5.4 in terms of whether the balance of frequencies among words in the synset contribute to the quality of our prediction result.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entropy analysis",
                "sec_num": "5.4.1"
            },
            {
                "text": "In order to measure a correlation between the balance of frequencies of words and the prediction result, we need a measure of 'balance'. In this case we have chosen information entropy (Shannon, 1948) , the measure of bits of information required to convey a particular result. The entropy of a synset's frequencies here is measured using the proportion Table 4 : Regression co-efficients between independent variables synset category and synset entropy, and dependent variable prediction improvement over baseline (statistically significant results p < 0.05 marked *) of total uses of the synset that each particular word represents. A synset in which frequencies are reasonably evenly distributed has high information entropy and a synset in which one or more words are very frequent as a proportion of use of that synset as a whole have low entropy. We then carried out multiple regression analysis using the category of the synset (attitudinal or not attitudinal, coded as 1 and 0 for this analysis) and the entropy of the synset's members' frequencies as our two independent variables; this allows us to separate out the two effects of synset skewness and attitudinality. Regression co-efficients are shown in Table 4 . Table 4 shows that in general, performance is negatively correlated with both category but positively with entropy, although the correlation with category is not always significant. The positive relationship with entropy confirms our suspicion in Gardiner and Dras (2007) that statistical techniques perform better when the synset does not have a highly dominant member. The negative correlation with category implies that the reverse of our main hypothesis holds: that our statistical method works better for predicting the use of non-attitudinal near-synonyms.",
                "cite_spans": [
                    {
                        "start": 185,
                        "end": 200,
                        "text": "(Shannon, 1948)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 1472,
                        "end": 1496,
                        "text": "Gardiner and Dras (2007)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 354,
                        "end": 361,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 1215,
                        "end": 1222,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 1225,
                        "end": 1232,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Entropy analysis",
                "sec_num": "5.4.1"
            },
            {
                "text": "There are two questions that arise from the result that our Inkpen-based method gives a different result from the Edmonds-based one.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entropy analysis",
                "sec_num": "5.4.1"
            },
            {
                "text": "First, is our approximation to Inkpen's method inherently faulty or can it be improved in some way? We know from Section 4 that it tends to perform well where her method performs well. An obvious second test is to compare our results to another test described in Inkpen (2007) which used a larger set of near-synonyms and tested the predictive power using the British National Corpus as a source of test contexts. This test will test our system's performance in genres quite different from news-wire text, and allow us to make a further comparison with Inkpen's method.",
                "cite_spans": [
                    {
                        "start": 263,
                        "end": 276,
                        "text": "Inkpen (2007)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entropy analysis",
                "sec_num": "5.4.1"
            },
            {
                "text": "Second, why do we perform significantly better for near-synonyms without attitude difference? One possible explanation that we intend to explore is that attitude differences are predicted by attitude differences exhibited in a very large context; perhaps an entire document or section thereof. Sentiment analysis techniques may be able to be used to detect the attitude bearing parts of a document and these may serve as more useful features for predicting attitudinal word choice than surrounding words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entropy analysis",
                "sec_num": "5.4.1"
            },
            {
                "text": "In this paper we have developed a modification to Inkpen's method of making a near-synonym choice that on a set of her test data performs reasonably promisingly; however, when tested on a larger set of near-synonyms on average it does not perform very differently to the baseline. We have also shown that, contrary to our hypothesis that near-synonyms with attitude differences would perform better using statistical methods, on this method the nearsynonyms without attitude differences are predicted better when there's a difference in predictive power.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "6"
            },
            {
                "text": "Ultimately, we plan to develop a system that will acquire and predict usage of attitudinal nearsynonyms, drawing on statistical methods and methods from sentiment analysis. In order to achieve this we will need a comprehensive understanding of why this method's performance was not adequate for the task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "6"
            },
            {
                "text": "Available at http://get1t.sf.net/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The result of equation 6 is undefined when any of C(x) = 0, C(y) = 0 or C(x, y) = 0 hold, that is, x or y or at least one n-gram containing x and y cannot be found in the Web 1T counts. For the purpose of computing s(c, g), we define PMI(x, y) = 0 when C(x) = 0, C(y) = 0 or C(x, y) = 0, so that it has no influence on the score s(c, g) given by equation 5.3 Typically, in this case, all candidates have scored 0.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "All references to the Wall Street Journal data used in this paper refer toCharniak et. al (2000).5 Inkpen actually gives two methods, one using PMI estimates from document counts, one using PMI estimates using word counts. Here we are discussing her word count method and use those values in our table.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "Thank you to: Diana Inkpen for sending us a copy of Inkpen (2007) while it was under review; and Tobias Hawker for providing a copy of his Web 1T processing software, Get 1T, before its public release.This work has been supported by the Australian Research Council under Discovery Project DP0558852.",
                "cite_spans": [
                    {
                        "start": 52,
                        "end": 65,
                        "text": "Inkpen (2007)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Learning to paraphrase: An unsupervised approach using multiplesequence alignment",
                "authors": [
                    {
                        "first": "Regina",
                        "middle": [],
                        "last": "Barzilay",
                        "suffix": ""
                    },
                    {
                        "first": "Lillian",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "HLT-NAACL 2003: Main Proceedings",
                "volume": "",
                "issue": "",
                "pages": "16--23",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple- sequence alignment. In HLT-NAACL 2003: Main Pro- ceedings, pages 16-23.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Web 1T 5-gram Version 1",
                "authors": [
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [],
                        "last": "Franz",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. http://www.ldc.upenn.edu/ Catalog/CatalogEntry.jsp?catalogId= LDC2006T13.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "BLLIP 1987-89 WSJ Corpus Release 1",
                "authors": [
                    {
                        "first": "Eugene",
                        "middle": [],
                        "last": "Charniak",
                        "suffix": ""
                    },
                    {
                        "first": "Don",
                        "middle": [],
                        "last": "Blaheta",
                        "suffix": ""
                    },
                    {
                        "first": "Niyu",
                        "middle": [],
                        "last": "Ge",
                        "suffix": ""
                    },
                    {
                        "first": "Keith",
                        "middle": [],
                        "last": "Hall",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Hale",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Johnson",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, , and Mark Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1. http://www. ldc.upenn.edu/Catalog/CatalogEntry. jsp?catalogId=LDC2000T43.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Word association norms and mutual information",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Hanks",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "lexicography. Computational Linguistics",
                "volume": "16",
                "issue": "1",
                "pages": "22--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kenneth Church and Patrick Hanks. 1991. Word asso- ciation norms and mutual information, lexicography. Computational Linguistics, 16(1):22-29.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Passage retrieval vs. document retrieval for factoid question answering",
                "authors": [
                    {
                        "first": "L",
                        "middle": [
                            "A"
                        ],
                        "last": "Charles",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Clarke",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Egidio",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Terra",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
                "volume": "",
                "issue": "",
                "pages": "427--428",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Charles L. A. Clarke and Egidio L. Terra. 2003. Passage retrieval vs. document retrieval for factoid question an- swering. In Proceedings of the 26th Annual Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, pages 427-428, Toronto, Canada.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Choosing the word most typical in context using a lexical co-occurrence network",
                "authors": [
                    {
                        "first": "Philip",
                        "middle": [],
                        "last": "Edmonds",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "507--509",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philip Edmonds. 1997. Choosing the word most typical in context using a lexical co-occurrence network. In Proceedings of the 35th Annual Meeting of the Associ- ation for Computational Linguistics and the 8th Con- ference of the European Chapter of the Association for Computational Linguistics, pages 507-509, July.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "WordNet: An Electronic Lexical Database",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. The MIT Press, May.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Corpus statistics approaches to discriminating among near-synonyms",
                "authors": [
                    {
                        "first": "Mary",
                        "middle": [],
                        "last": "Gardiner",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Dras",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING 2007)",
                "volume": "",
                "issue": "",
                "pages": "31--39",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mary Gardiner and Mark Dras. 2007. Corpus statistics approaches to discriminating among near-synonyms. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING 2007), pages 31-39, Melbourne, Australia, September.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Practical queries of a massive n-gram database",
                "authors": [
                    {
                        "first": "Tobias",
                        "middle": [],
                        "last": "Hawker",
                        "suffix": ""
                    },
                    {
                        "first": "Mary",
                        "middle": [],
                        "last": "Gardiner",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Bennetts",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Australasian Language Technology Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tobias Hawker, Mary Gardiner, and Andrew Bennetts. 2007. Practical queries of a massive n-gram database. In Proceedings of the Australasian Language Technol- ogy Workshop 2007 (ALTW 2007), Melbourne, Aus- tralia. To appear.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "USYD: WSD and lexical substitution using the Web 1T corpus",
                "authors": [
                    {
                        "first": "Tobias",
                        "middle": [],
                        "last": "Hawker",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of SemEval-2007: the 4th International Workshop on Semantic Evaluations",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tobias Hawker. 2007. USYD: WSD and lexical substi- tution using the Web 1T corpus. In Proceedings of SemEval-2007: the 4th International Workshop on Se- mantic Evaluations, Prague, Czech Republic.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Building and using a lexical knowledge-base of near-synonym differences",
                "authors": [
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Inkpen",
                        "suffix": ""
                    },
                    {
                        "first": "Graeme",
                        "middle": [],
                        "last": "Hirst",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Computational Linguistics",
                "volume": "32",
                "issue": "2",
                "pages": "223--262",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diana Inkpen and Graeme Hirst. 2006. Building and using a lexical knowledge-base of near-synonym dif- ferences. Computational Linguistics, 32(2):223-262, June.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A statistical model of nearsynonym choice",
                "authors": [
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Inkpen",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "ACM Transactions of Speech and Language Processing",
                "volume": "4",
                "issue": "1",
                "pages": "1--17",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diana Inkpen. 2007. A statistical model of near- synonym choice. ACM Transactions of Speech and Language Processing, 4(1):1-17, January.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "The practical value of N-grams in generation",
                "authors": [
                    {
                        "first": "Irene",
                        "middle": [],
                        "last": "Langkilde",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the 9th International Natural Language Generation Workshop",
                "volume": "",
                "issue": "",
                "pages": "248--255",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Irene Langkilde and Kevin Knight. 1998. The practical value of N-grams in generation. In Proceedings of the 9th International Natural Language Generation Work- shop, pages 248-255, Niagra-on-the-Lake, Canada.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Forest-based statistical sentence generation",
                "authors": [
                    {
                        "first": "Irene",
                        "middle": [],
                        "last": "Langkilde",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics and the 6th Conference on Applied Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "170--177",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics and the 6th Conference on Applied Natural Language Processing (NAACL-ANLP 2000), pages 170-177, Seattle, USA.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Introduction to the Practice of Statistics",
                "authors": [
                    {
                        "first": "David",
                        "middle": [
                            "S"
                        ],
                        "last": "Moore",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [
                            "P"
                        ],
                        "last": "Mccabe",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David S. Moore and George P. McCabe. 2003. Introduc- tion to the Practice of Statistics. W. H. Freeman and Company, 4 edition.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "A mathematical theory of communication",
                "authors": [
                    {
                        "first": "Claude",
                        "middle": [
                            "E"
                        ],
                        "last": "Shannon",
                        "suffix": ""
                    }
                ],
                "year": 1948,
                "venue": "Bell System Technical Journal",
                "volume": "27",
                "issue": "",
                "pages": "379--423",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379-423 and 623-656, July and October.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Mining the web for synonyms: PMI-IR versus LSA on TOEFL",
                "authors": [
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Turney",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of the Twelfth European Conference on Machine Learning (ECML 2001)",
                "volume": "",
                "issue": "",
                "pages": "491--502",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Peter Turney. 2001. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelfth European Conference on Machine Learn- ing (ECML 2001), pages 491-502, Freiburg and Ger- many.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF2": {
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>Improvement over</td><td colspan=\"2\">Number of synsets</td><td/></tr><tr><td>baseline</td><td/><td/><td/></tr><tr><td/><td colspan=\"3\">Att. Non-att. Total</td></tr><tr><td>\u2265 +20%</td><td>0</td><td>16</td><td>16</td></tr><tr><td>\u2265 +10% and &lt; +20%</td><td>1</td><td>7</td><td>8</td></tr><tr><td>\u2265 +5% and &lt; +1%</td><td>2</td><td>2</td><td>4</td></tr><tr><td>&gt; -5% and &lt; -5%</td><td>2</td><td>10</td><td>12</td></tr><tr><td>\u2264 -5% and &gt; -10%</td><td>0</td><td>6</td><td>6</td></tr><tr><td>\u2264 -10% and &gt; -20%</td><td>1</td><td>3</td><td>4</td></tr><tr><td>\u2264 -20%</td><td>1</td><td>8</td><td>9</td></tr></table>",
                "text": "Performance of the baseline and our method on all test sentences (k = 2)",
                "html": null
            }
        }
    }
}