{
    "paper_id": "P96-1010",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:02:54.469917Z"
    },
    "title": "Combining Trigram-based and Feature-based Methods for Context-Sensitive Spelling Correction",
    "authors": [
        {
            "first": "Andrew",
            "middle": [
                "R"
            ],
            "last": "Golding",
            "suffix": "",
            "affiliation": {
                "laboratory": "Mitsubishi Electric Research Laboratories",
                "institution": "",
                "location": {
                    "postCode": "201 Broadway, 02139",
                    "settlement": "Cambridge",
                    "region": "MA"
                }
            },
            "email": "golding@com"
        },
        {
            "first": "Yves",
            "middle": [],
            "last": "Schabes",
            "suffix": "",
            "affiliation": {
                "laboratory": "Mitsubishi Electric Research Laboratories",
                "institution": "",
                "location": {
                    "postCode": "201 Broadway, 02139",
                    "settlement": "Cambridge",
                    "region": "MA"
                }
            },
            "email": "schabes@merl@com"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper addresses the problem of correcting spelling errors that result in valid, though unintended words (such as peace and piece, or quiet and quite) and also the problem of correcting particular word usage errors (such as amount and number, or among and between). Such corrections require contextual information and are not handled by conventional spelling programs such as Unix spell. First, we introduce a method called Trigrams that uses part-of-speech trigrams to encode the context. This method uses a small number of parameters compared to previous methods based on word trigrams. However, it is effectively unable to distinguish among words that have the same part of speech. For this case, an alternative feature-based method called Bayes performs better; but Bayes is less effective than Trigrams when the distinction among words depends on syntactic constraints. A hybrid method called Tribayes is then introduced that combines the best of the previous two methods. The improvement in performance of Tribayes over its components is verified experimentally. Tribayes is also compared with the grammar checker in Microsoft Word, and is found to have substantially higher performance.",
    "pdf_parse": {
        "paper_id": "P96-1010",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper addresses the problem of correcting spelling errors that result in valid, though unintended words (such as peace and piece, or quiet and quite) and also the problem of correcting particular word usage errors (such as amount and number, or among and between). Such corrections require contextual information and are not handled by conventional spelling programs such as Unix spell. First, we introduce a method called Trigrams that uses part-of-speech trigrams to encode the context. This method uses a small number of parameters compared to previous methods based on word trigrams. However, it is effectively unable to distinguish among words that have the same part of speech. For this case, an alternative feature-based method called Bayes performs better; but Bayes is less effective than Trigrams when the distinction among words depends on syntactic constraints. A hybrid method called Tribayes is then introduced that combines the best of the previous two methods. The improvement in performance of Tribayes over its components is verified experimentally. Tribayes is also compared with the grammar checker in Microsoft Word, and is found to have substantially higher performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Spelling correction has become a very common technology and is often not perceived as a problem where progress can be made. However, conventional spelling checkers, such as Unix spell, are concerned only with spelling errors that result in words that cannot be found in a word list of a given language. One analysis has shown that up to 15% of spelling errors that result from elementary typographical errors (character insertion, deletion, or transposition) yield another valid word in the language (Peterson, 1986 ). These errors remain undetected by traditional spelling checkers. In addition to typographical errors, words that can be easily confused with each other (for instance, the homophones peace and piece) also remain undetected. Recent studies of actual observed spelling errors have estimated that overall, errors resulting in valid words account for anywhere from 25% to over 50% of the errors, depending on the application (Kukich, 1992) .",
                "cite_spans": [
                    {
                        "start": 500,
                        "end": 515,
                        "text": "(Peterson, 1986",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 939,
                        "end": 953,
                        "text": "(Kukich, 1992)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We will use the term context-sensitive spelling correction to refer to the task of fixing spelling errors that result in valid words, such as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "(1) * Can I have a peace of cake?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "where peace was typed when piece was intended.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The task will be cast as one of lexical disambiguation: we are given a predefined collection of confusion sets, such as {peace, piece}, {than, then}, etc., which circumscribe the space of spelling errors to look for. A confusion set means that each word in the set could mistakenly be typed when another word in the set was intended. The task is to predict, given an occurrence of a word in one of the confusion sets, which word in the set was actually intended.",
                "cite_spans": [
                    {
                        "start": 128,
                        "end": 155,
                        "text": "piece}, {than, then}, etc.,",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Previous work on context-sensitive spelling correction and related lexical disambiguation tasks has its limitations. Word-trigram methods (Mays, Damerau, and Mercer, 1991) require an extremely large body of text to train the word-trigram model; even with extensive training sets, the problem of sparse data is often acute. In addition, huge word-trigram tables need to be available at run time. Moreover, word trigrams are ineffective at capturing longdistance properties such as discourse topic and tense.",
                "cite_spans": [
                    {
                        "start": 138,
                        "end": 171,
                        "text": "(Mays, Damerau, and Mercer, 1991)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Feature-based approaches, such as Bayesian classifters (Gale, Church, and Yarowsky, 1993) , decision lists (Yarowsky, 1994) , and Bayesian hybrids (Golding, 1995) , have had varying degrees of success for the problem of context-sensitive spelling correction. However, we report experiments that show that these methods are of limited effectiveness for cases such as {their, there, they're} and {than, then}, where the predominant distinction to be made among the words is syntactic. Train Test  Most freq. Base  their, there, they're  3265  850  than, then  2096  514  its, it's  1364  366  your, you're  750  187  begin, being  559  146  passed, past  307  74  quiet, quite  264  66  weather, whether  239  61  accept, except  173  50  lead, led  173 Table 1 : Performance of the baseline method for 18 confusion sets. \"Train\" and \"Test\" give the number of occurrences of any word in the confusion set in the training and test corpora. \"Most freq.\" is the word in the confusion set that occurred most often in the training corpus. \"Base\" is the percentage of correct predictions of the baseline system on the test corpus.",
                "cite_spans": [
                    {
                        "start": 55,
                        "end": 89,
                        "text": "(Gale, Church, and Yarowsky, 1993)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 107,
                        "end": 123,
                        "text": "(Yarowsky, 1994)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 147,
                        "end": 162,
                        "text": "(Golding, 1995)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 483,
                        "end": 751,
                        "text": "Train Test  Most freq. Base  their, there, they're  3265  850  than, then  2096  514  its, it's  1364  366  your, you're  750  187  begin, being  559  146  passed, past  307  74  quiet, quite  264  66  weather, whether  239  61  accept, except  173  50  lead, led  173",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 752,
                        "end": 759,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we first introduce a method called",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confusion set",
                "sec_num": null
            },
            {
                "text": "Trigrams that uses part-of-speech trigrams to encode the context. This method greatly reduces the number of parameters compared to known methods, which are based on word trigrams. This method also has the advantage that training can be done once and for all, and quite manageably, for all confusion sets; new confusion sets can be added later without any additional training. This feature makes Trigrams a very easily expandable system. Empirical evaluation of the trigram method demonstrates that it performs well when the words to be discriminated have different parts of speech, but poorly when they have the same part of speech. In the latter case, it is reduced to simply guessing whichever word in the confusion set is the most common representative of its part-of-speech class.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confusion set",
                "sec_num": null
            },
            {
                "text": "We consider an alternative method, Bayes, a Bayesian hybrid method (Golding, 1995) , for the case where the words have the same part of speech. We confirm experimentally that Bayes and Trigrams have complementary performance, Trigrams being better when the words in the confusion set have different parts of speech, and Bayes being better when they have the same part of speech. We introduce a hybrid method, Tribayes, that exploits this complementarity by invoking each method when it is strongest. Tribayes achieves the best accuracy of the methods under consideration in all situations.",
                "cite_spans": [
                    {
                        "start": 67,
                        "end": 82,
                        "text": "(Golding, 1995)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confusion set",
                "sec_num": null
            },
            {
                "text": "To evaluate the performance of Tribayes with respect to an external standard, we compare it to the grammar checker in Microsoft Word. Tribayes is found to have substantially higher performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confusion set",
                "sec_num": null
            },
            {
                "text": "This paper is organized as follows: first we present the methodology used in the experiments. We then discuss the methods mentioned above, interleaved with experimental results. The comparison with Microsoft Word is then presented. The final section concludes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Confusion set",
                "sec_num": null
            },
            {
                "text": "Each method will be described in terms of its operation on a single confusion set C = {Wl,..., w,}; that is, we will say how the method disambiguates occurrences of words wl through wn. The methods handle multiple confusion sets by applying the same technique to each confusion set independently. Each method involves a training phase and a test phase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "2"
            },
            {
                "text": "We trained each method on 80% (randomly selected) of the Brown corpus (Ku6era and Francis, 1967) and tested it on the remaining 20%. All methods were run on a collection of 18 confusion sets, which were largely taken from the list of \"Words Commonly Confused\" in the back of Random House (Flexner, 1983) . The confusion sets were selected on the basis of being frequently-occurring in Brown, and representing a variety of types of errors, including homophone confusions (e.g., {peace, piece}) and grammatical mistakes (e.g., {among, between}). A few confusion sets not in Random House were added, representing typographical errors (e.g., {begin, being}). The confusion sets appear in Table 1 .",
                "cite_spans": [
                    {
                        "start": 82,
                        "end": 96,
                        "text": "Francis, 1967)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 288,
                        "end": 303,
                        "text": "(Flexner, 1983)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 684,
                        "end": 691,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "2"
            },
            {
                "text": "As an indicator of the difficulty of the task, we compared each of the methods to the method which ignores the context in which the word occurred, and just guesses based on the priors. Table 1 shows the performance of the baseline method for the 18 confusion sets.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 185,
                        "end": 192,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Baseline",
                "sec_num": "3"
            },
            {
                "text": "Mays, Damerau, and Mercer (1991) proposed a word-trigram method for context-sensitive spelling correction based on the noisy channel model. Since this method is based on word trigrams, it requires an enormous training corpus to fit all of these parameters accurately; in addition, at run time it requires extensive system resources to store and manipulate the resulting huge word-trigram table.",
                "cite_spans": [
                    {
                        "start": 0,
                        "end": 32,
                        "text": "Mays, Damerau, and Mercer (1991)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trigrams",
                "sec_num": "4"
            },
            {
                "text": "In contrast, the method proposed here uses partof-speech trigrams. Given a target occurrence of a word to correct, it substitutes in turn each word in the confusion set into the sentence. Por each substitution, it calculates the probability of the resulting sentence. It selects as its answer the word that gives the highest probability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "More precisely, assume that the word wh occurs in a sentence W = wl...Wk...wn, and that w~ is a word we are considering substituting for it, yielding sentence W I. Word w~ is then preferred over wk iff P(W') > P(W), where P(W) and P(W') are the probabilities of sentences W and W f respectively. 1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "We calculate P(W) using the tag sequence of W as an intermediate quantity, and summing, over all possible tag sequences, the probability of the sentence with that tagging; that is:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "P(W) = ~ P(W, T) T",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "where T is a tag sequence for sentence W. The above probabilities are estimated as is traditionally done in trigram-based part-of-speech tagging (Church, 1988; DeRose, 1988) :",
                "cite_spans": [
                    {
                        "start": 145,
                        "end": 159,
                        "text": "(Church, 1988;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 160,
                        "end": 173,
                        "text": "DeRose, 1988)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "P(W, T) = P(W | T) P(T) (1) = Π_i P(w_i | t_i) Π_i P(t_i | t_{i-2} t_{i-1}) (2)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "where T = tl ...tn, and P(ti]tl-2ti-1) is the prob ability of seeing a part-of-speech tag tl given the two preceding part-of-speech tags ti-2 and ti-1. Equations 1 and 2 will also be used to tag sentences W and W ~ with their most likely part-of-speech sequences. This will allow us to determine the tag that 1To enable fair comparisons between sequences of different length (as when considering maybe and may be),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "= HP(wi[ti) HP(t, lt,_2t,_l)(2)",
                "sec_num": null
            },
            {
                "text": "we actually compare the per-word geometric mean of the sentence probabilities. Otherwise, the shorter sequence will usually be preferred, as shorter sequences tend to have higher probabilities than longer ones.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "= HP(wi[ti) HP(t, lt,_2t,_l)(2)",
                "sec_num": null
            },
            {
                "text": "would be assigned to each word in the confusion set when substituted into the target sentence. Table 2 gives the results of the trigram method (as well as the Bayesian method of the next section) for the 18 confusion sets. 2 The results are broken down into two cases: \"Different tags\" and \"Same tags\". A target occurrence is put in the latter iff all words in the confusion set would have the same tag when substituted into the target sentence. In the \"Different tags\" condition, Trigrams generally does well, outscoring Bayes for all but 3 confusion sets -and in each of these cases, making no more than 3 errors more than Bayes.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 95,
                        "end": 102,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "= HP(wi[ti) HP(t, lt,_2t,_l)(2)",
                "sec_num": null
            },
            {
                "text": "In the \"Same tags\" condition, however, Trigrams performs only as well as Baseline. This follows from Equations 1 and 2: when comparing P(W) and P(W'), the dominant term corresponds to the most likely tagging; and in this term, if the target word wk and its substitute wk' have the same tag t, then the comparison amounts to comparing P(wk|t) and P(wk'|t). In other words, the decision reduces to which of the two words, wk and wk', is the more common representative of part-of-speech class t. 3",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4",
                "sec_num": null
            },
            {
                "text": "The previous section showed that the part-of-speech trigram method works well when the words in the confusion set have different parts of speech, but essentially cannot distinguish among the words if they have the same part of speech. In this case, a more effective approach is to learn features that characterize the different contexts in which each word tends to occur. A number of feature-based methods have been proposed, including Bayesian classifiers (Gale, Church, and Yarowsky, 1993) , decision lists (Yarowsky, 1994) , Bayesian hybrids (Golding, 1995) , and, more recently, a method based on the Winnow multiplicative weight-updating algorithm (Golding and Roth, 1996) . We adopt the Bayesian hybrid method, which we will call Bayes, having experimented with each of the methods and found Bayes to be among the best-performing for the task at hand. This method has been described elsewhere (Golding, 1995) and so will only be briefly reviewed here; however, the version used here uses an improved smoothing technique, which is mentioned briefly below.",
                "cite_spans": [
                    {
                        "start": 457,
                        "end": 491,
                        "text": "(Gale, Church, and Yarowsky, 1993)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 509,
                        "end": 525,
                        "text": "(Yarowsky, 1994)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 545,
                        "end": 560,
                        "text": "(Golding, 1995)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 653,
                        "end": 677,
                        "text": "(Golding and Roth, 1996)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 899,
                        "end": 914,
                        "text": "(Golding, 1995)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "~In the experiments reported here, the trigram method was run using the tag inventory derived from the Brown corpus, except that a handful of common function words were tagged as themselves, namely: except, than, then, to, too, and whether.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "3 In a few cases, however, Trig'rams does not get exactly the same score as Baseline. This can happen when the words in the confusion set have more than one tag in common; e.g., for (affect, effect}, the words can both be norms or verbs. Trigrams may then choose differently when the words are tagged as nouns versus verbs, whereas Baseline makes the same choice in all cases. Bayes uses two types of features: context words and collocations. Context-word features test for the presence of a particular word within +k words of the target word; collocations test for a pattern of up to ~ contiguous words and/or part-of-speech tags around the target word. Examples for the confusion set {dairy, diary} include:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "(2) milk within +10 words (3) in POSS-DET where (2) is a context-word feature that tends to imply dairy, while (3) is a collocation implying diary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "Feature 3includes the tag POSS-I)ET for possessive determiners (his, her, etc.), and matches, for example, the sequence in his 4 in:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "(4) He made an entry in his diary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "Bayes learns these features from a training corpus of correct text. Each time a word in the confusion set occurs in the corpus, Bayes proposes every feature that matches the context --one context-word feature for every distinct word within +k words of the target word, and one collocation for every way of 4A tag is taken to match a word in the sentence iff the tag is a member of the word's set of possible part-ofspeech tags. Tag sets are used, rather than actual tags, because it is in general impossible to tag the sentence uniquely at spelling-correction time, as the identity of the target word has not yet been established. expressing a pattern of up to ~ contiguous elements. After working through the whole training corpus, Bayes collects and returns the set of features proposed. Pruning criteria may be applied at this point to eliminate features that are based on insufficient data, or that are ineffective at discriminating among the words in the confusion set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
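            {
                "text": "The training loop just described can be summarized in pseudocode (an illustrative sketch of our reading of the method, not the authors' code):\n\nfor each occurrence of a confusion-set word w in the training corpus:\n    for each distinct word c within k positions of w:\n        propose context-word feature (c within ±k words)\n    for each pattern of up to ℓ contiguous words and/or tag sets around w:\n        propose collocation feature (pattern)\ncollect all proposed features, then prune those based on insufficient\ndata or with little power to discriminate among the confusion set",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },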
            {
                "text": "At run time, Bayes uses the features learned during training to correct the spelling of target words. Let jr be the set of features that match a particular target occurrence. Suppose for a moment that we were applying a naive Bayesian approach. We would then calculate the probability that each word wi in the confusion set is the correct identity of the target word, given that we have observed features 9 r, using Bayes' rule with the independence assumption: P(w,l~') = P(flw,) P",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
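            {
                "text": "Concretely, the naive Bayesian decision can be sketched as follows (pseudocode only; count(·) is a hypothetical name for training-corpus statistics, and logs are summed to avoid underflow):\n\nfor each word wi in the confusion set:\n    score(wi) = log P(wi)               # prior, estimated from counts\n    for each feature f in F:\n        score(wi) += log P(f | wi)      # MLE: count(f, wi) / count(wi)\nreturn the wi with the highest score\n\nThe shared denominator P(F) is dropped, since it is the same for every wi and so does not affect the comparison.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },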
            {
                "text": "where each probability on the right-hand side is calculated by a maximum-likelihood estimate (MLE) over the training set. We would then pick as our answer the wi with the highest P(wiI.T\" ). The method presented here differs from the naive approach in two respects: first, it does not assume independence among features, but rather has heuristics for detecting strong dependencies, and resolving them by deleting features until it is left with a reduced set .T \"~ of (relatively) independent features, which are then used in place of ~\" in the formula above. Second, to estimate the P(flwi) terms, rather than using a simple MLE, it performs smoothing by interpolating between the MLE of P(flwi) and the MLE of the unigram probability, P(f). These enhancements greatly improve the performance of Bayes over the naive Bayesian approach. The results of Bayes are shown in Table 2 . 5 Generally speaking, Bayes does worse than Trigrams when the words in the confusion set have different parts of speech. The reason is that, in such cases, the predominant distinction to be made among the words is syntactic; and the trigram method, which brings to bear part-of-speech knowledge for the whole sentence, is better equipped to make this distinction than Bayes, which only tests up to two syntactic elements in its collocations. Moreover, Bayes' use of context-word features is arguably misguided here, as context words pick up differences in topic and tense, which are irrelevant here, and in fact tend to degrade performance by detecting spurious differences. In a few cases, such as {begin, being}, this effect is enough to drive Bayes slightly below Baseline. 6",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 870,
                        "end": 877,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
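            {
                "text": "The smoothing step can be written as a simple interpolation (a sketch; alpha is a hypothetical name for the mixing weight, which may depend on how much data is available for f and wi):\n\nP_smoothed(f | wi) = alpha · P_mle(f | wi) + (1 - alpha) · P_mle(f)\n\nInterpolating toward the unigram probability P(f) keeps rarely seen features from receiving degenerate maximum-likelihood estimates of 0 or 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },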
            {
                "text": "For the condition where the words have the same part of speech, Table 2 shows that Bayes almost always does better than Trigrams. This is because, as discussed above, Trigrams is essentially acting like Baseline in this condition. Bayes, on the other hand, learns features that allow it to discriminate among the particular words at issue, regardless of their part of speech. The one exception is {country, county}, for which Bayes scores somewhat below Baseline. This is another case in which context words actually hurt Bayes, as running it without context words again improved its performance to the Baseline level.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 64,
                        "end": 71,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Bayes",
                "sec_num": "5"
            },
            {
                "text": "The previous sections demonstrated the complementarity between Trigrams and Bayes: Trigrams works best when the words in the confusion set do not all have the same part of speech, while Bayes works best when they do. This complementarity leads directly to a hybrid method, Tribayes, that gets the best of each. It applies Trigrams first; in the process, it ascertains whether all the words in the confusion set would have the same tag when substituted into the target sentence. If they do not, it accepts the answer provided by Trigrams; if they do, it applies Bayes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tribayes",
                "sec_num": "6"
            },
            {
                "text": "5 For the experiments reported here, Bayes was configured as follows: k (the half-width of the window of context words) was set to 10; ℓ (the maximum length of a collocation) was set to 2; feature strength was measured using the reliability metric; pruning of collocations at training time was enabled; and pruning of context words was minimal -- context words were pruned only if they had fewer than 2 occurrences or non-occurrences. 6 We confirmed this by running Bayes without context words (i.e., with collocations only). Its performance was then always at or above Baseline.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tribayes",
                "sec_num": "6"
            },
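            {
                "text": "Tribayes' control flow is thus a simple dispatch (a sketch of our understanding, not the authors' code):\n\nrun Trigrams, noting the tags assigned to the confusion-set words\nunder the most likely tagging of the sentence\nif the words would not all receive the same tag:\n    return the answer of Trigrams    # syntactic evidence decides\nelse:\n    return the answer of Bayes       # lexical context decides",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tribayes",
                "sec_num": "6"
            },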
            {
                "text": "Two points about the application of Bayes in the hybrid method: first, Bayes is now being asked to distinguish among words only when they have the same part of speech. It should be trained accordingly --that is, only on examples where the words have the same part of speech. The Bayes component of the hybrid will therefore be trained on a subset of the examples that would be used for training the stand-alone version of Bayes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tribayes",
                "sec_num": "6"
            },
            {
                "text": "The second point about Bayes is that, like Trigrams, it sometimes makes uninformed decisions -decisions based only on the priors. For Bayes, this happens when none of its features matches the target occurrence. Since, for now, we do not have a good \"third-string\" algorithm to call when both Trigrams and Bayes fall by the wayside, we content ourselves with the guess made by Bayes in such situations. Table 3 shows the performance of Tribayes compared to its components. In the \"Different tags\" condition, Tribayes invokes Trigrams, and thus scores identically. In the \"Same tags\" condition, Tribayes invokes Bayes. It does not necessarily score the same, however, because, as mentioned above, it is trained on a subset of the examples that stand-alone Bayes is trained on. This can lead to higher or lower performance --higher because the training examples are more homogeneous (representing only cases where the words have the same part of speech); lower because there may not be enough training examples to learn from. Both effects show up in Table 3 . Table 4 summarizes the overall performance of all methods discussed. It can be seen that Trigrams and Bayes each have their strong points. Tribayes, however, achieves the maximum of their scores, by and large, the exceptions being due to cases where one method or the other had an unexpectedly low score (discussed in Sections 4 and 5). The confusion set {raise, rise} demonstrates (albeit modestly) the ability of the hybrid to outscore both of its components, by putting together the performance of the better component for both conditions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 402,
                        "end": 409,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 1047,
                        "end": 1054,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 1057,
                        "end": 1064,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Tribayes",
                "sec_num": "6"
            },
            {
                "text": "The previous section evaluated the performance of Tribayes with respect to its components, and showed that it got the best of both. In this section, we calibrate this overall performance by comparing Tribayes with Microsoft Word (version 7.0), a widely used word-processing system whose grammar checker represents the state of the art in commercial context-sensitive spelling correction.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "Unfortunately, we cannot evaluate Word using \"prediction accuracy\" (as we did above), as we do not always have access to the system's predictions -- sometimes it suppresses its predictions in an effort to filter out the bad ones. Instead, in this section we will use two parameters to evaluate system performance: system accuracy when tested on correct usages of words, and system accuracy on incorrect usages. Together, these two parameters give a complete picture of system performance: the score on correct usages measures the system's rate of false negative errors (changing a right word to a wrong one), while the score on incorrect usages measures false positives (failing to change a wrong word to a right one). We will not attempt to combine these two parameters into a single measure of system \"goodness\", as the appropriate combination varies for different users, depending on the user's typing accuracy and tolerance of false negatives and positives. The test sets for the correct condition are the same ones used earlier, based on 20% of the Brown corpus. The test sets for the incorrect condition were generated by corrupting the correct test sets; in particular, each correct occurrence of a word in the confusion set was replaced, in turn, with each other word in the confusion set, yielding n - 1 incorrect occurrences for each correct occurrence (where n is the size of the confusion set). We will also refer to the incorrect condition as the corrupted condition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "Table 4: Overall performance of all methods: Baseline (Base), Trigrams (T), Bayes (B), and Tribayes (TB). System scores are given as percentages of correct predictions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 0,
                        "end": 7,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "To run Microsoft Word on a particular test set, we started by disabling error checking for all error types except those needed for the confusion set at issue. This was done to avoid confounding effects.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "For {their, there, they're}, for instance, we enabled \"word usage\" errors (which include substitutions of their for there, etc.), but we disabled \"contractions\" (which include replacing they're with they are). We then invoked the grammar checker, accepting every suggestion offered. Sometimes errors were pointed out but no correction given; in such cases, we skipped over the error. Sometimes the suggestions led to an infinite loop, as with the sentence:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "(5) Be sure it's out when you leave.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "where the system alternately suggested replacing it's with its and vice versa. In such cases, we accepted the first suggestion, and then moved on.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "Unlike Word, Tribayes, as presented above, is purely a predictive system, and never suppresses its suggestions. This is somewhat of a handicap in the comparison, as Word can achieve higher scores in the correct condition by suppressing its weaker suggestions (albeit at the cost of lowering its scores in the corrupted condition). To put Tribayes on an equal footing, we added a postprocessing step in which it uses thresholds to decide whether to suppress its suggestions. A suggestion is allowed to go through iff the ratio of the probability of the word being suggested to the probability of the word that appeared originally in the sentence is above a threshold. The probability associated with each word is the perword sentence probability in the case of Trigrams, or the conditional probability P(wi [~) in the case of Bayes. The thresholds are set in a preprocessing 77 phase based on the training set (80% of Brown, in our case). A single tunable parameter controls how steeply the thresholds are set; for the study here, this parameter was set to the middle of its useful range, providing a fairly neutral balance between reducing false negatives and increasing false positives.",
                "cite_spans": [
                    {
                        "start": 806,
                        "end": 809,
                        "text": "[~)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
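            {
                "text": "The suppression test amounts to a likelihood-ratio check (pseudocode sketch; theta is a hypothetical name for the tuned threshold):\n\np_suggested = probability assigned to the suggested word\np_original  = probability assigned to the word as typed\nif p_suggested / p_original > theta:\n    offer the suggestion\nelse:\n    suppress it    # trades corrupted-condition accuracy for fewer false negatives on correct text",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },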
            {
                "text": "The results of Word and Tribayes for the 18 confusion sets appear in Table 5 . Six of the confusion sets (marked with asterisks in the table) are not handled by Word; Word's scores in these cases are 100% for the correct condition and 0% for the corrupted condition, which are the scores one gets by never making a suggestion. The opposite behavior --always suggesting a different word --would result in scores of 0% and 100% (for a confusion set of size 2). Although this behavior is never observed in its extreme form, it is a good approximation of Word's behavior in a few cases, such as {principal, principle}, where it scores 12% and 94%. In general, Word achieves a high score in either the correct or the corrupted condition, but not both at once.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 69,
                        "end": 76,
                        "text": "Table 5",
                        "ref_id": "TABREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "Tribayes compares quite favorably with Word in this experiment. In both the correct and corrupted conditions, Tribayes' scores are mostly higher (often by a wide margin) or the same as Word's; in the cases where they are lower in one condition, they are almost always considerably higher in the other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "The one exception is {raise, rise}, where Tribayes and Word score about the same in both conditions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Microsoft Word",
                "sec_num": null
            },
            {
                "text": "Spelling errors that result in valid, though unintended words, have been found to be very common in the production of text. Such errors were thought to be too difficult to handle and remain undetected in conventional spelling checkers. This paper introduced Trigrams, a part-of-speech trigram-based method, that improved on previous trigram methods, which were word-based, by greatly reducing the number of parameters. The method was supplemented by Bayes, a method that uses context features to discriminate among the words in the confusion set. Trigrams and Bayes were shown to have complementary strengths. A hybrid method, Tribayes, was then introduced to exploit this complementarity by applying Trigrams when the words in the confusion set do not have the same part of speech, and Bayes when they do. Tribayes thereby gets the best of both methods, as was confirmed experimentally. Tribayes was also compared with the grammar checker in Microsoft Word, and was found to have substantially higher performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "8"
            },
            {
                "text": "Tribayes is being used as part of a grammarchecking system we are currently developing. We are presently working on elaborating the system's threshold model; scaling up the number of confusion sets that can be handled efficiently; and acquiring confusion sets (or confusion matrices) automatically.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "8"
            },
            {
                "text": "Microsoft Word Correct Corrupted Correct Corrupted their, there, they're than, then its, it's your, you're begin, being passed, past quiet, quite weather, whether accept, except lead, led cite, sight, site principal, principle rMse, rise affect, effect peace, piece country, county amount, number among, between ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tribayes",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "A stochastic parts program and noun phrase parser for unrestricted text",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Second Conference on Applied Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "136--143",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Church, Kenneth Ward. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing, pages 136-143, Austin, TX.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Grammatical category disambiguation by statistical optimization",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "J"
                        ],
                        "last": "Derose",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Computational Linguistics",
                "volume": "14",
                "issue": "",
                "pages": "31--39",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "DeRose, S.J. 1988. Grammatical category disam- biguation by statistical optimization. Computa- tional Linguistics, 14:31-39.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Random House Unabridged Dictionary. Random House",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "B"
                        ],
                        "last": "Flexner",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Flexner, S. B., editor. 1983. Random House Unabridged Dictionary. Random House, New York. Second edition.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A method for disambiguating word senses in a large corpus",
                "authors": [
                    {
                        "first": "William",
                        "middle": [
                            "A"
                        ],
                        "last": "Gale",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Kenneth",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Yarowsky",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Computers and the Humanities",
                "volume": "26",
                "issue": "",
                "pages": "415--439",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gale, William A., Kenneth W. Church, and David Yarowsky. 1993. A method for disambiguating word senses in a large corpus. Computers and the Humanities, 26:415-439.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Applying Winnow to context-sensitive spelling correction",
                "authors": [
                    {
                        "first": "Andrew",
                        "middle": [
                            "P~"
                        ],
                        "last": "Golding",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Roth",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Machine Learning: Proceedings of the 13th International Conference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Golding, Andrew P~. and Dan Roth. 1996. Apply- ing Winnow to context-sensitive spelling correc- tion. In Lorenza Saitta, editor, Machine Learning: Proceedings of the 13th International Conference, Bari, Italy. To appear.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "A Bayesian hybrid method for context-sensitive spelling correction",
                "authors": [
                    {
                        "first": "Andrew",
                        "middle": [
                            "R"
                        ],
                        "last": "Golding",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the Third Workshop on Very Large Corpora",
                "volume": "",
                "issue": "",
                "pages": "39--53",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Golding, Andrew R. 1995. A Bayesian hybrid method for context-sensitive spelling correction. In Proceedings of the Third Workshop on Very Large Corpora, pages 39-53, Boston, MA.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Techniques for automaticMly correcting words in text",
                "authors": [
                    {
                        "first": "Karen",
                        "middle": [],
                        "last": "Kukich",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "ACM Computing Surveys",
                "volume": "24",
                "issue": "4",
                "pages": "377--439",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kukich, Karen. 1992. Techniques for automaticMly correcting words in text. ACM Computing Sur- veys, 24(4):377-439, December.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Computational Analysis of Present-Day American English",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Kuaera",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [
                            "N"
                        ],
                        "last": "Francis",
                        "suffix": ""
                    }
                ],
                "year": 1967,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kuaera, H. and W. N. Francis. 1967. Computa- tional Analysis of Present-Day American English. Brown University Press, Providence, RI.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Context based spelling correction. Information Processing and Management",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Mays",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Damerau",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [
                            "L"
                        ],
                        "last": "Mercer",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "",
                "volume": "27",
                "issue": "",
                "pages": "517--522",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mays, Eric, bred J. Damerau, and Robert L. Mercer. 1991. Context based spelling correction. Informa- tion Processing and Management, 27(5):517-522.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A note on undetected typing errors",
                "authors": [
                    {
                        "first": "James",
                        "middle": [
                            "L"
                        ],
                        "last": "Peterson",
                        "suffix": ""
                    }
                ],
                "year": 1986,
                "venue": "Communications of the ACM",
                "volume": "29",
                "issue": "7",
                "pages": "633--637",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Peterson, James L. 1986. A note on undetected typing errors. Communications of the ACM, 29(7):633-637, July.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Yarowsky",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "88--95",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yarowsky, David. 1994. Decision lists for lexi- cal ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Associa- tion for Computational Linguistics, pages 88-95, Las Cruces, NM.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF2": {
                "html": null,
                "type_str": "table",
                "content": "<table/>",
                "num": null,
                "text": ""
            },
            "TABREF4": {
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td>Confusion set</td><td/><td colspan=\"2\">System scores</td></tr><tr><td/><td>Base</td><td>T</td><td>B</td><td>TB</td></tr><tr><td>their, there, they're</td><td colspan=\"3\">56.8 97.6 94.4</td><td>97.6</td></tr><tr><td>than, then</td><td colspan=\"2\">63.4 94.9</td><td>93.2</td><td>94.9</td></tr><tr><td>its, it's</td><td colspan=\"3\">91.3 98.1 95.9</td><td>98.1</td></tr><tr><td>your, you're</td><td colspan=\"2\">89.3 98.9</td><td>89.8</td><td>98.9</td></tr><tr><td>begin, being</td><td colspan=\"3\">93.2 97.3 91.8</td><td>97.3</td></tr><tr><td>passed, past</td><td colspan=\"2\">68.9 95.9</td><td colspan=\"2\">89.2 95.9</td></tr><tr><td>quiet, quite</td><td colspan=\"3\">83.3 95.5 89.4</td><td>95.5</td></tr><tr><td>weather, whether</td><td colspan=\"2\">86.9 93.4</td><td colspan=\"2\">96.7 93.4</td></tr><tr><td>accept, except</td><td colspan=\"3\">70.0 82.0 88.0</td><td>82.0</td></tr><tr><td>lead, led</td><td>46.9</td><td colspan=\"3\">83.7 79.6 83.7</td></tr><tr><td>cite, sight, site</td><td>64.7</td><td colspan=\"2\">70.6 73.5</td><td>70.6</td></tr><tr><td>principal, principle</td><td>58.8</td><td colspan=\"3\">88.2 85.3 88.2</td></tr><tr><td>raise, rise</td><td colspan=\"3\">64.1 64.1 74.4</td><td>76.9</td></tr><tr><td>affect, effect</td><td colspan=\"2\">91.8 93.9</td><td colspan=\"2\">95.9 95.9</td></tr><tr><td>peace, piece</td><td colspan=\"2\">44.0 44.0</td><td>90.0</td><td>90.0</td></tr><tr><td>country, county</td><td>91.9</td><td colspan=\"3\">91.9 85.5 85.5</td></tr><tr><td>amount, number</td><td>71.5</td><td colspan=\"2\">73.2 82.9</td><td>82.9</td></tr><tr><td>among, between</td><td colspan=\"2\">71.5 71.5</td><td colspan=\"2\">75.3 75.3</td></tr></table>",
                "num": null,
                "text": "Performance of the hybrid method, Tribayes (TB), as compared with Trigrams (T) and Bayes (B). System scores are given as percentages of correct predictions. The results are broken down by whether or not all words in the confusion set would have the same tagging when substituted into the target sentence. The \"Breakdown\" columns give the percentage of examples under each condition."
            },
            "TABREF6": {
                "html": null,
                "type_str": "table",
                "content": "<table/>",
                "num": null,
                "text": "Comparison of Tribayes with Microsoft Word. System scores are given for two test sets, one containing correct usages, and the other containing incorrect (corrupted) usages. Scores are given as percentages of correct answers. Asterisks mark confusion sets that are not handled by Microsoft Word."
            }
        }
    }
}