{
    "paper_id": "P13-1005",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:34:58.491153Z"
    },
    "title": "Smoothed marginal distribution constraints for language modeling",
    "authors": [
        {
            "first": "Brian",
            "middle": [],
            "last": "Roark",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Oregon Health & Science University",
                "location": {
                    "settlement": "Portland"
                }
            },
            "email": "roarkbr@gmail.com"
        },
        {
            "first": "Cyril",
            "middle": [],
            "last": "Allauzen",
            "suffix": "",
            "affiliation": {},
            "email": "allauzen@google.com"
        },
        {
            "first": "Michael",
            "middle": [],
            "last": "Riley",
            "suffix": "",
            "affiliation": {},
            "email": "riley@google.com"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present an algorithm for re-estimating parameters of backoff n-gram language models so as to preserve given marginal distributions, along the lines of wellknown Kneser-Ney (1995) smoothing. Unlike Kneser-Ney, our approach is designed to be applied to any given smoothed backoff model, including models that have already been heavily pruned. As a result, the algorithm avoids issues observed when pruning Kneser-Ney models (Siivola et al., 2007; Chelba et al., 2010), while retaining the benefits of such marginal distribution constraints. We present experimental results for heavily pruned backoff ngram models, and demonstrate perplexity and word error rate reductions when used with various baseline smoothing methods. An open-source version of the algorithm has been released as part of the OpenGrm ngram library. 1",
    "pdf_parse": {
        "paper_id": "P13-1005",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present an algorithm for re-estimating parameters of backoff n-gram language models so as to preserve given marginal distributions, along the lines of wellknown Kneser-Ney (1995) smoothing. Unlike Kneser-Ney, our approach is designed to be applied to any given smoothed backoff model, including models that have already been heavily pruned. As a result, the algorithm avoids issues observed when pruning Kneser-Ney models (Siivola et al., 2007; Chelba et al., 2010), while retaining the benefits of such marginal distribution constraints. We present experimental results for heavily pruned backoff ngram models, and demonstrate perplexity and word error rate reductions when used with various baseline smoothing methods. An open-source version of the algorithm has been released as part of the OpenGrm ngram library. 1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Smoothed n-gram language models are the defacto standard statistical models of language for a wide range of natural language applications, including speech recognition and machine translation. Such models are trained on large text corpora, by counting the frequency of n-gram collocations, then normalizing and smoothing (regularizing) the resulting multinomial distributions. Standard techniques store the observed n-grams and derive probabilities of unobserved n-grams via their longest observed suffix and \"backoff\" costs associated with the prefix histories of the unobserved suffixes. Hence the size of the model grows with the number of observed n-grams, which is very large for typical training corpora.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Natural language applications, however, are commonly used in scenarios requiring relatively small footprint models. For example, applications running on mobile devices or in low latency streaming scenarios may be required to limit the complexity of models and algorithms to achieve the desired operating profile. As a result, statistical language models -an important component of many such applications -are often trained on very large corpora, then modified to fit within some pre-specified size bound. One method to achieve significant space reduction is through randomized data structures, such as Bloom (Talbot and Osborne, 2007) or Bloomier (Talbot and Brants, 2008) filters. These data structures permit efficient querying for specific n-grams in a model that has been stored in a fraction of the space required to store the full, exact model, though with some probability of false positives. Another common approach -which we pursue in this paper -is model pruning, whereby some number of the n-grams are removed from explicit storage in the model, so that their probability must be assigned via backoff smoothing. One simple pruning method is count thresholding, i.e., discarding n-grams that occur less than k times in the corpus. Beyond count thresholding, the most widely used pruning methods (Seymore and Rosenfeld, 1996; Stolcke, 1998) employ greedy algorithms to reduce the number of stored n-grams by comparing the stored probabilities to those that would be assigned via the backoff smoothing mechanism, and removing those with the least impact according to some criterion.",
                "cite_spans": [
                    {
                        "start": 608,
                        "end": 634,
                        "text": "(Talbot and Osborne, 2007)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 647,
                        "end": 672,
                        "text": "(Talbot and Brants, 2008)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 1305,
                        "end": 1334,
                        "text": "(Seymore and Rosenfeld, 1996;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1335,
                        "end": 1349,
                        "text": "Stolcke, 1998)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "While these greedy pruning methods are highly effective for models estimated with most common smoothing approaches, they have been shown to be far less effective with Kneser-Ney trained language models (Siivola et al., 2007; Chelba et al., 2010) , leading to severe degradation in model quality relative to other standard smoothing meth-4-gram models",
                "cite_spans": [
                    {
                        "start": 202,
                        "end": 224,
                        "text": "(Siivola et al., 2007;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 225,
                        "end": 245,
                        "text": "Chelba et al., 2010)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Interpolated Perplexity n-grams Perplexity n-grams Smoothing method full pruned (\u00d71000) full pruned (\u00d71000) Absolute Discounting (Ney et al., 1994) 120. 5 197.3 383.4 119.8 198.1 386.2 Witten-Bell (Witten and Bell, 1991) 118.8 196.3 380.4 121.6 202.3 396.4 Ristad (1995) 126.4 203.6 395.6 ---N/A --- Katz (1987) 119.8 198.1 386.2 ---N/A ---Kneser-Ney (Kneser and Ney, 1995) 114.5 285.1 388.2 115.8 274.3 398.7 Mod. Kneser-Ney (Chen and Goodman, 1998) Table 3 in Chelba et al. (2010) , demonstrating perplexity degradation of Kneser-Ney smoothed models in contrast to other common smoothing methods. Data: English Broadcast News, 128M words training; 692K words test; 143K word vocabulary. 4-gram language models, pruned using Stolcke (1998) relative entropy pruning to approximately 1.3% of the original size of 31,095,260 n-grams.",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 147,
                        "text": "(Ney et al., 1994)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 153,
                        "end": 220,
                        "text": "5 197.3 383.4 119.8 198.1 386.2 Witten-Bell (Witten and Bell, 1991)",
                        "ref_id": null
                    },
                    {
                        "start": 257,
                        "end": 270,
                        "text": "Ristad (1995)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 300,
                        "end": 311,
                        "text": "Katz (1987)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 351,
                        "end": 373,
                        "text": "(Kneser and Ney, 1995)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 415,
                        "end": 450,
                        "text": "Kneser-Ney (Chen and Goodman, 1998)",
                        "ref_id": null
                    },
                    {
                        "start": 462,
                        "end": 482,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 726,
                        "end": 740,
                        "text": "Stolcke (1998)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 451,
                        "end": 458,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Backoff",
                "sec_num": null
            },
            {
                "text": "ods. Thus, while Kneser-Ney may be the preferred smoothing method for large, unpruned models -where it can achieve real improvements over other smoothing methods -when relatively sparse, pruned models are required, it has severely diminished utility. Table 1 presents a slightly reformatted version of Table 3 from Chelba et al. (2010) . In their experiments (see Table 1 caption for specifics on training/test setup), they trained 4-gram Broadcast News language models using a variety of both backoff and interpolated smoothing methods and measured perplexity before and after Stolcke (1998) relative entropy based pruning. With this size training data, the perplexity of all of the smoothing methods other than Kneser-Ney degrades from around 120 with the full model to around 200 with the heavily pruned model. Kneser-Ney smoothed models have lower perplexity with the full model than the other methods by about 5 points, but degrade with pruning to far higher perplexity, between 270-285.",
                "cite_spans": [
                    {
                        "start": 315,
                        "end": 335,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 251,
                        "end": 258,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 302,
                        "end": 309,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 364,
                        "end": 371,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Backoff",
                "sec_num": null
            },
            {
                "text": "The cause of this degradation is Kneser-Ney's unique method for estimating smoothed language models, which will be presented in more detail in Section 3. Briefly, the smoothing method reestimates lower-order n-gram parameters in order to avoid over-estimating the likelihood of n-grams that already have ample probability mass allocated as part of higher-order n-grams. This is done via a marginal distribution constraint which requires the expected frequency of the lower-order n-grams to match their observed frequency in the training data, much as is commonly done for maximum entropy model training. Goodman (2001) proved that, under certain assumptions, such constraints can only improve language models. Lower-order n-gram parameters resulting from Kneser-Ney are not relative frequency estimates, as with other smoothing methods; rather they are parameters estimated specifically for use within the larger smoothed model.",
                "cite_spans": [
                    {
                        "start": 604,
                        "end": 618,
                        "text": "Goodman (2001)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Backoff",
                "sec_num": null
            },
            {
                "text": "There are (at least) a couple of reasons why such parameters do not play well with model pruning. First, the pruning methods commonly use lower order n-gram probabilities to derive an estimate of state marginals, and, since these parameters are no longer smoothed relative frequency estimates, they do not serve that purpose well. For this reason, the widely-used SRILM toolkit recently provided switches to modify their pruning algorithm to use another model for state marginal estimates (Stolcke et al., 2011) . Second, and perhaps more importantly, the marginal constraints that were applied prior to smoothing will not in general be consistent with the much smaller pruned model. For example, if a bigram parameter is modified due to the presence of some set of trigrams, and then some or all of those trigrams are pruned from the model, the bigram associated with the modified parameter will be unlikely to have an overall expected frequency equal to its observed frequency anymore. As a result, the resulting model degrades dramatically with pruning.",
                "cite_spans": [
                    {
                        "start": 489,
                        "end": 511,
                        "text": "(Stolcke et al., 2011)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Backoff",
                "sec_num": null
            },
            {
                "text": "In this paper, we present an algorithm that imposes marginal distribution constraints of the sort used in Kneser-Ney modeling on arbitrary smoothed backoff n-gram language models. Our approach makes use of the same sort of derivation as the original Kneser-Ney modeling, but, among other differences, relies on smoothed estimates of the empirical relative frequency rather than the unsmoothed observed frequency. The algorithm can be applied after the smoothed model has been pruned, hence avoiding the pitfalls associated with Kneser-Ney modeling. Furthermore, while Kneser-Ney is conventionally defined as a variant of absolute discounting, our method can be applied to models smoothed with any backoff smoothing, including mixtures of models, widely used for domain adaptation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Backoff",
                "sec_num": null
            },
            {
                "text": "We next establish formal preliminaries and our smoothed marginal distribution constraints method.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Backoff",
                "sec_num": null
            },
            {
                "text": "N-gram language models are typically presented mathematically in terms of words w, the strings (histories) h that precede them, and the suffixes of the histories (backoffs) h that are used in the smoothing recursion. Let V be a vocabulary (alphabet), and V * a string of zero or more symbols drawn from V . Let V k denote the set of strings w \u2208 V * of length k, i.e., |w| = k. We will use variables u, v, w, x, y, z \u2208 V to denote single symbols from the vocabulary; h, g \u2208 V * to denote history sequences preceding the specific word; and h , g \u2208 V * the respective backoff histories of h and g as typically defined (see below). For a string w = w 1 . . . w |w| we can calculate the smoothed conditional probability of each word w i in the sequence given the k words that preceded it, depending on the order of the Markov model. Let",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "h k i = w i\u2212k .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": ". . w i\u22121 be the previous k words in the sequence. Then the smoothed model is defined recursively as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "P(wi | h k i ) = P(wi | h k i ) if c(h k i wi) > 0 \u03b1(h k i ) P(wi | h k\u22121 i ) otherwise",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "where c(h k i w i ) is the count of the n-gram sequence w i\u2212k . . . w i in the training corpus; P is a regularized probability estimate that provides some probability mass for unobserved n-grams; and \u03b1(h k i ) is a factor that ensures normalization. Note that for h = h k i , the typically defined backoff history h = h k\u22121 i , i.e., the longest suffix of h that is not h itself. When we use h and g (for notational convenience) in future equations, it is this definition that we are using.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "There are many ways to estimate P, including absolute discounting (Ney et al., 1994) , Katz (1987) and Witten and Bell (1991) . Interpolated models are special cases of this form, where the P is determined using model mixing, and the \u03b1 parameter is exactly the mixing factor value for the lower order model. N-gram language models allow for a sparse representation, so that only a subset of the possible ngrams must be explicitly stored. Probabilities for the rest of the n-grams are calculated through the \"otherwise\" semantics in the equation above. For an n-gram language model G, we will say that an n-gram hw \u2208 G if it is explicitly represented in the model; otherwise hw \u2208 G. In the standard ngram formulation above, the assumption is that if c(h k i w i ) > 0 then the n-gram has a parameter; yet with pruning, we remove many observed n-grams from the model, hence this is no longer the appropriate criterion. We reformulate the standard equation as follows:",
                "cite_spans": [
                    {
                        "start": 66,
                        "end": 84,
                        "text": "(Ney et al., 1994)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 87,
                        "end": 98,
                        "text": "Katz (1987)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 103,
                        "end": 125,
                        "text": "Witten and Bell (1991)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "P(wi|h k i ) = \u03b2(h k i wi) if h k i wi \u2208 G \u03b1(h k i , h k\u22121 i ) P(wi|h k\u22121 i ) otherwise (1) where \u03b2(h k i w i ) is the parameter associated with the n-gram h k i w i and \u03b1(h k i , h k\u22121 i )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "is the backoff cost associated with going from state h k i to state h k\u22121 i . We assume that, if hw \u2208 G then all prefixes and suffixes of hw are also in G. Figure 1 presents a schema of an automaton representation of an n-gram model, of the sort used in the OpenGrm library (Roark et al., 2012) . States represent histories h, and the words w, whose probabilities are conditioned on h, label the arcs, leading to the history state for the subsequent word. State labels are provided in Figure 1 as a convenience, to show the (implicit) history encoded by the state, e.g., ' xyz' indicates that the state represents a history with the previous three symbols being x, y and z. Failure arcs, labeled with a \u03c6 in Figure 1 , encode an \"otherwise\" semantics and have as destination the origin state's backoff history. Many higher order states will back off to the same lower order state, specifically those that share the same suffix.",
                "cite_spans": [
                    {
                        "start": 274,
                        "end": 294,
                        "text": "(Roark et al., 2012)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 571,
                        "end": 572,
                        "text": "'",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 156,
                        "end": 164,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 485,
                        "end": 493,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 708,
                        "end": 716,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "Note that, in general, the recursive definition of backoff may require the traversal of several back- off arcs before emitting a word, e.g., the highest order states in Figure 1 needing to traverse a couple of \u03c6 arcs to reach state 'z'. We can define the backoff cost between a state h k i and any of its suffix states as follows. Let \u03b1(h, h) = 1 and for m > 1,",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 169,
                        "end": 177,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "yz z xyz u/\u03b2(xyzu) w/\u03b2(yzw) w/\u03b2(zw) \u03c6/\u03b1(xyz,yz) \u03c6/\u03b1(yz,z) zw yyz \u03c6/\u03b1(yyz,yz) yzw \u03b5 yzu yzv v/\u03b2(yyzv) w/\u03b2(yyzw) \u03c6/\u03b1(z,\u03b5) \u03c6/\u03b1(yzw,zw) z/\u03b2(z)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "\u03b1(h k i , h k\u2212m i ) = m j=1 \u03b1(h k\u2212j+1 i , h k\u2212j i ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "If h k i w \u2208 G then the probability of that n-gram will be defined in terms of backoff to its longest suffix h k\u2212m i w \u2208 G. Let h wG denote the longest suffix of h such that h wG w \u2208 G. Note that this is not necessarily a proper suffix, since h wG could be h itself or it could be . Then",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P(w | h) = \u03b1(h, h wG ) \u03b2(h wG w)",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "which is equivalent to equation 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "Marginal distribution constraints attempt to match the expected frequency of an n-gram with its observed frequency. In other words, if we use the model to randomly generate a very large corpus, the n-grams should occur with the same relative frequency in both the generated and original (training) corpus. Standard smoothing methods overgenerate lower-order n-grams. Using standard n-gram notation (where g is the backoff history for g), this constraint is stated in Kneser and Ney (1995) as",
                "cite_spans": [
                    {
                        "start": 467,
                        "end": 488,
                        "text": "Kneser and Ney (1995)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P(w | h ) = g:g =h P(g, w | h )",
                        "eq_num": "(3)"
                    }
                ],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "where P is the empirical relative frequency estimate. Taking this approach, certain base smoothing methods end up with very nice, easy to calculate solutions based on counts. Absolute discounting (Ney et al., 1994) in particular, using the above approach, leads to the well-known Kneser-Ney smoothing approach (Kneser and Ney, 1995; Chen and Goodman, 1998) . We will follow this same approach, with a couple of changes. First, we will make use of regularized estimates of relative frequency P rather than raw relative frequency P. Second, rather than just looking at observed histories h that back off to h , we will look at all histories (observed or not) of the length of the longest history in the model. For notational simplicity, suppose we have an n+1-gram model, hence the longest history in the model is of length n. Assume the length of the particular backoff history |h | = k. Let V n\u2212k h be the set of strings h \u2208 V n with h as a suffix. Then we can restate the marginal distribution constraint in equation 3 as",
                "cite_spans": [
                    {
                        "start": 196,
                        "end": 214,
                        "text": "(Ney et al., 1994)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 310,
                        "end": 332,
                        "text": "(Kneser and Ney, 1995;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 333,
                        "end": 356,
                        "text": "Chen and Goodman, 1998)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P(w | h ) = h\u2208V n\u2212k h P(h, w | h )",
                        "eq_num": "(4)"
                    }
                ],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "Next we solve for \u03b2(h w) parameters used in equation 1. Note that h is a suffix of any h \u2208 V n\u2212k h , so conditioning probabilities on h and h is the same as conditioning on just h. Each of the following derivation steps simply relies on the chain rule or definition of conditional probability, as well as pulling terms out of the summation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "P(w | h ) = h\u2208V n\u2212k h P(h, w | h ) = h\u2208V n\u2212k h P(w | h, h ) P(h | h ) = h\u2208V n\u2212k h P(w | h) P(h) g\u2208V n\u2212k h P(g) = 1 g\u2208V n\u2212k h P(g) h\u2208V n\u2212k h P(w | h) P(h) (5)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "Then, multiplying both sides by the normalizing denominator on the right-hand side and using equation 2 to substitute \u03b1(h, h wG ) \u03b2(h wG w) for P(w | h):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P(w | h ) g\u2208V n\u2212k h P(g) = h\u2208V n\u2212k h P(w | h) P(h) = h\u2208V n\u2212k h \u03b1(h, h wG ) \u03b2(h wG w) P(h)",
                        "eq_num": "(6)"
                    }
                ],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "Note that we are only interested in h w \u2208 G, hence there are two disjoint subsets of histories h \u2208 V n\u2212k h that are being summed over: those such that h wG = h and those such that |h wG | > |h |. We next separate these sums in the next step of the derivation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P(w | h ) g\u2208V n\u2212k h P(g) = h\u2208V n\u2212k h :|h wG |>|h | \u03b1(h, h wG ) \u03b2(h wG w) P(h) + h\u2208V n\u2212k h :h wG =h \u03b1(h, h ) \u03b2(h w) P(h)",
                        "eq_num": "(7)"
                    }
                ],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "Finally, we solve for \u03b2(h w) in the second sum on the right-hand side of equation 7, yielding the formula in equation 8. Note that this equation is the correlate of equation 6in Kneser and Ney",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u03b2(h w) = P(w | h ) g\u2208V n\u2212k h P(g) \u2212 h\u2208V n\u2212k h :|h wG |>|h | \u03b1(h, h wG ) \u03b2(h wG w) P(h) h\u2208V n\u2212k h :h wG =h \u03b1(h, h ) P(h)",
                        "eq_num": "(8)"
                    }
                ],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "(1995), modulo the two differences noted earlier: use of smoothed probability P rather than raw relative frequency; and summing over all history substrings in V n\u2212k h rather than just those with count greater than zero, which is also a change due to smoothing. Keep in mind, P is the target expected frequency from a given smoothed model. Kneser-Ney models are not useful input models, since their P n-gram parameters are not relative frequency estimates. This means that we cannot simply 'repair' pruned Kneser-Ney models, but must use other smoothing methods where the smoothed values are based on relative frequency estimation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "There are, in addition, two other important differences in our approach from that in Kneser and Ney (1995) , which would remain as differences even if our target expected frequency were the unsmoothed relative frequency P instead of the smoothed estimate P. First, the sum in the numerator is over histories of length n, the highest order in the n-gram model, whereas in the Kneser-Ney approach the sum is over histories that immediately back off to h , i.e., from the next highest order in the n-gram model. Thus the unigram distribution is with respect to the bigram model, the bigram model is with respect to the trigram model, and so forth. In our optimization, we sum instead over all possible history sequences of length n. Second, an early assumption made in Kneser and Ney (1995) is that the denominator term in their equation 6(our Eq. 8) is constant across all words for a given history, which is clearly false. We do not make this assumption. Of course, the probabilities must be normalized, hence the final values of \u03b2(h w) will be proportional to the values in Eq. 8.",
                "cite_spans": [
                    {
                        "start": 85,
                        "end": 106,
                        "text": "Kneser and Ney (1995)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 766,
                        "end": 787,
                        "text": "Kneser and Ney (1995)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "We briefly note that, like Kneser-Ney, if the baseline smoothing method is consistent, then the amount of smoothing in the limit will go to zero and our resulting model will also be consistent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "The smoothed relative frequency estimate P and higher order \u03b2 values on the right-hand side of Eq. 8 are given values (from the input smoothed model and previous stages in the algorithm, respectively), implying an algorithm that estimates highest orders of the model first. In addition, steady state history probabilities P(h) must be calculated. We turn to the estimation algorithm next.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Marginal distribution constraints",
                "sec_num": "3"
            },
            {
                "text": "Our algorithm takes a smoothed backoff n-gram language model in an automaton format (see Figure 1) and returns a smoothed backoff n-gram language model with the same topology. For all ngrams in the model that are suffixes of other ngrams in the model -i.e., that are backed-off to -we calculate the weight provided by equation 8 and assign it (after normalization) to the appropriate n-gram arc in the automaton. There are several important considerations for this algorithm, which we address in this section. First, we must provide a probability for every state in the model. Second, we must memoize summed values that are used repeatedly. Finally, we must iterate the calculation of certain values that depend on the n-gram weights being re-estimated.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 89,
                        "end": 98,
                        "text": "Figure 1)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Model constraint algorithm",
                "sec_num": "4"
            },
            {
                "text": "The steady state probability P(h) is taken to be the probability of observing h after a long word sequence, i.e., the state's relative frequency in a long sequence of randomly-generated sentences from the model:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "P(h) = lim m\u2192\u221e w 1 ...wmP (w 1 . . . w m h) (9)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "whereP is the corpus probability derived as follows: The smoothed n-gram probability model P(w | h) is naturally extended to a sentence s = w 0 . . . w l , where w 0 = <s> and w l = </s> are the sentence initial and final words, by",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "P(s) = l i=1 P(w i | h n i ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "The corpus probability s 1 . . . s r is taken as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P(s 1 . . . s r ) = (1 \u2212 \u03bb)\u03bb r\u22121 r i=1 P(s i )",
                        "eq_num": "(10)"
                    }
                ],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "where \u03bb parameterizes the corpus length distribution. 2 Assuming the n-gram language model automaton G has a single final state </s> into 2P models words in a corpus rather than a single sentence since Equation 9 tends to zero as m \u2192 \u221e otherwise. In Markov chain terms, the corpus distribution is made irreducible to allow a non-trivial stationary distribution. which all </s> arcs enter, adding a \u03bb weighted arc from the </s> state to the initial state and having a final weight 1 \u2212 \u03bb in order to leave the automaton at the </s> state will model this corpus distribution. According to Eq. 9, P (h) is then the stationary distribution of the finite irreducible Markov Chain defined by this altered automaton. There are many methods for computing such a stationary distribution; we use the well-known power method (Stewart, 1999) .",
                "cite_spans": [
                    {
                        "start": 813,
                        "end": 828,
                        "text": "(Stewart, 1999)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
            {
                "text": "One difficulty remains to be resolved. The backoff arcs have a special interpretation in the automaton: they are traversed only if a word fails to match at the higher order. These failure arcs must be properly handled before applying standard stationary distribution calculations. A simple approach would be for each word w and state h such that hw / \u2208 G, but h w \u2208 G, add a w arc from state h to h w with weight \u03b1(h, h )\u03b2(h w ) and then remove all failure arcs (see Figure 2a ). This however results in an automaton with |V | arcs leaving every state, which is unwieldy with larger vocabularies and n-gram orders. Instead, for each word w and state h such that hw \u2208 G, add a w arc from state h to h w with weight \u2212\u03b1(h, h )\u03b2(h w) and then replace all failure labels with labels (see Figure 2b ). In this case, the added negativelyweighted arcs compensate for the excess probability mass allowed by the epsilon arcs 3 . The number of added arcs is no more than found in the original model.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 467,
                        "end": 476,
                        "text": "Figure 2a",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 783,
                        "end": 792,
                        "text": "Figure 2b",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },
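            {
                "text": "The \u03b5-replacement scheme can be sketched as follows on a toy dictionary representation of the automaton (our own illustration; the implementation itself operates on OpenFst machines). Each state records its backoff arc and its explicit word arcs; by the prefix/suffix closure assumption, any word observed at h is also observed at its backoff state h\u2032.

def replace_failure_arcs(states):
    # states[h] = {'backoff': (h_prime, alpha) or None,
    #              'arcs': {w: (next_state, beta)}}
    # Returns (src, label, weight, dst) arcs of the transformed machine:
    # the failure arc becomes an epsilon arc, and each word observed at h
    # gets a compensating negatively-weighted arc to the backoff destination.
    arcs = []
    for h, info in states.items():
        for w, (dst, beta) in info['arcs'].items():
            arcs.append((h, w, beta, dst))                 # original word arcs
        if info['backoff'] is not None:
            h_prime, alpha = info['backoff']
            arcs.append((h, '<eps>', alpha, h_prime))      # phi arc relabeled as epsilon
            for w in info['arcs']:
                dst_lo, beta_lo = states[h_prime]['arcs'][w]   # h'w is in G by suffix closure
                arcs.append((h, w, -alpha * beta_lo, dst_lo))  # cancel the epsilon path for w
    return arcs",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Steady state probability calculation",
                "sec_num": "4.1"
            },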
            {
                "text": "We are summing over all possible histories of length n in equation 8, and the steady state probability calculation outlined in the previous section includes the probability mass for histories h \u2208 G. The probability mass of states not in G ends up being allocated to the state representing their longest suffix that is explicitly in G. That is the state that would be active when these histories are encountered. Hence, once we have calculated the steady state probabilities for each state in the smoothed model, we only need to sum over states explicitly in the model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },
            {
                "text": "As stated earlier, the use of \u03b2(h wG w) in the numerator of equation 8 for h wG that are larger than h implies that the longer n-grams must be re-estimated first. Thus we process each history length in descending order, finishing with the unigram state. Since we assume that, for every ngram hw \u2208 G, every prefix and suffix is also in G, we know that if h w \u2208 G then there is no history h such that h is a suffix of h and hw \u2208 G. This allows us to recursively accumulate the \u03b1(h, h ) P(h) in the denominator of Eq. 8.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },
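            {
                "text": "The descending-order traversal itself is simple to organize; in the sketch below (our own illustration) each history is a tuple of words and the unigram state is the empty tuple.

from collections import defaultdict

def histories_longest_first(states):
    # Group history states by length so that re-estimation proceeds from
    # the longest histories down to the unigram state (the empty tuple).
    by_len = defaultdict(list)
    for h in states:
        by_len[len(h)].append(h)
    return [by_len[k] for k in sorted(by_len, reverse=True)]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },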
            {
                "text": "(a) (b) h h' w/\u03b2(hw) w'/\u03b2(h'w') \u03c6/\u03b1(h,h') hw h'w' w'/\u03b1(h,h') \u03b2(h'w') h h' w/\u03b2(hw) w/\u03b2(h'w) \u03b5/\u03b1(h,h') hw h'w w/-\u03b1(h,h') \u03b2(h'w)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },
            {
                "text": "For every n-gram, we can accumulate values required to calculate the three terms in equation 8, and pass them along to calculate lower order ngram values. Note, however, that a naive implementation of an algorithm to assign these values is O(|V | n ). This is due to the fact that the denominator factor must be accumulated for all higher order states that do not have the given n-gram. Hence, for every state h directly backing off to h (order |V |), and for every n-gram arc leaving state h (order |V |), some value must be accumulated. This can be particularly clearly seen at the unigram state, which has an arc for every unigram (the size of the vocabulary): for every bigram state (also order of the vocabulary), in the naive algorithm we must look for every possible arc. Since there are O(|V | n\u22122 ) lower order histories in the model in the worst case, we have overall complexity O(|V | n ). However, we know that the number of stored n-grams is very sparse relative to the possible number of n-grams, so the typical case complexity is far lower. Importantly, the denominator is calculated by first assuming that all higher order states back off to the current n-gram, then subtract out the mass associated with those that are already observed at the higher order. In such a way, we need only perform work for higher order n-grams hw that are explicitly in the model. This optimization achieves orders-of-magnitude speedups, so that models take seconds to process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },
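            {
                "text": "The denominator bookkeeping can be sketched as follows (our own illustration with hypothetical variable names), under the assumption described above that the denominator sums \u03b1(h\u2032, h) P(h\u2032) over higher order states h\u2032 whose extension by w is not explicit in the model. The total backed-off mass is computed once per state, and each explicit higher order n-gram then subtracts its share for its own word only, so the work is proportional to the number of stored n-grams rather than to |V| per state.

def denominators(h, backoff_states, ngrams, pi, alpha):
    # backoff_states: higher order states that back off directly to h
    # ngrams[s]: set of words with an explicit arc at state s
    # pi[s]: steady state probability of state s
    # alpha[s]: backoff weight alpha(s, h) of higher order state s
    total = sum(alpha[hp] * pi[hp] for hp in backoff_states)
    # Start by assuming every higher order state backs off for every word at h ...
    denom = {w: total for w in ngrams[h]}
    # ... then subtract the mass of n-grams that are explicit at the higher order.
    for hp in backoff_states:
        for w in ngrams[hp]:
            if w in denom:
                denom[w] -= alpha[hp] * pi[hp]
    return denom",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },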
            {
                "text": "Because smoothing is not necessarily con-strained across n-gram orders, it is possible that higher-order n-grams could be smoothed less than lower order n-grams, so that the numerator of equation 8 can be less than zero, which is not valid. A value less than zero means that the higher order n-grams will already produce the n-gram more frequently than its smoothed expected frequency. We set a minimum value for the numerator, and any n-gram numerator value less than is replaced with (for the current study, = 0.001). We find this to be relatively infrequent, about 1% of n-grams for most models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Accumulation of higher order values",
                "sec_num": "4.2"
            },
            {
                "text": "Recall that P and \u03b2 terms on the right-hand side of equation 8 are given and do not change. But there are two other terms in the equation that change as we update the n-gram parameters. The \u03b1(h, h ) backoff weights in the denominator ensure normalization at the higher order states, and change as the n-gram parameters at the current state are modified. Further, the steady state probabilities will change as the model changes. Hence, at each state, we must iterate the calculation of the denominator term: first adjust n-gram weights and normalize; then recalculate backoff weights at higher order states and iterate. Since this only involves the denominator term, each n-gram weight can be updated by multiplying by the ratio of the old term and the new term. After the entire model has been re-estimated, the steady state probability calculation presented in Section 4.1 is run again and model estimation happens again. As we shall see in the experimental results, this typically converges after just a few iterations. At this time, we have no convergence proofs for either of these iterative components to the algorithm, but expect that something can be said about this, which will be a priority in future work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Iteration",
                "sec_num": "4.3"
            },
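            {
                "text": "Since only the denominator of equation 8 changes when the higher order backoff weights are recalculated, the inner update can be written as a simple rescaling. The sketch below (our own illustration over plain dicts, with hypothetical names) multiplies each re-estimated n-gram weight by the ratio of the old denominator to the new one, so the numerator never needs to be recomputed inside the inner loop.

def rescale_by_denominator_ratio(beta, old_denom, new_denom):
    # new weight = numerator / new_denom = old weight * old_denom / new_denom
    return {ngram: b * old_denom[ngram] / new_denom[ngram]
            for ngram, b in beta.items()}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Iteration",
                "sec_num": "4.3"
            },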
            {
                "text": "All results presented here are for English Broadcast News. We received scripts for replicating the Chelba et al. (2010) results from the authors, and we report statistics on our replication of their paper's results in Table 2 . The scripts are distributed in such a way that the user supplies the data from LDC98T31 (1996 CSR HUB4 Language Model corpus) and the script breaks the collection into training and testing sets, normalizes the text, and trains and prunes the language models using the SRILM toolkit (Stolcke et al., 2011) . Presumably due to minor differences in text normalization, resulting in very slightly fewer n-grams in all conditions, we achieve negligibly lower perplexities (one or two tenths of a point) in all conditions, as can be seen when comparing with Table 1 . All of the same trends result, thus that paper's result is successfully replicated here. Note that we ran our Kneser-Ney pruning (noted with a \u2020 in the table), using the new -prune-history-lm switch in SRILM -created in response to the Chelba et al. (2010) paper -which allows the use of another model to calculate the state marginals for pruning. This fixes part of the problem -perplexity does not degrade as much as the Kneser-Ney pruned model in Table 1 -but, as argued earlier in this paper, this is not the sole reason for the degradation and the perplexity remains extremely inflated.",
                "cite_spans": [
                    {
                        "start": 99,
                        "end": 119,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 510,
                        "end": 532,
                        "text": "(Stolcke et al., 2011)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 1026,
                        "end": 1046,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 218,
                        "end": 225,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 780,
                        "end": 787,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 1240,
                        "end": 1247,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "We follow Chelba et al. (2010) in training and test set definition, vocabulary size, and parameters for reporting perplexity. Note that unigrams in the models are never pruned, hence all models assign probabilities over an identical vocabulary and perplexity is comparable across models. For all results reported here, we use the SRILM toolkit for baseline model training and pruning, then convert from the resulting ARPA format model to an OpenFst format (Allauzen et al., 2007) , as used in the OpenGrm n-gram library (Roark et al., 2012) . We then apply the marginal distribution constraints, and convert the result back to ARPA format for perplexity evaluation with the SRILM toolkit. All models are subjected to full normalization sanity checks, as with typical model functions in the OpenGrm library.",
                "cite_spans": [
                    {
                        "start": 10,
                        "end": 30,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 456,
                        "end": 479,
                        "text": "(Allauzen et al., 2007)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 520,
                        "end": 540,
                        "text": "(Roark et al., 2012)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "Recall that our algorithm assumes that, for every n-gram in the model, all prefix and suffix ngrams are also in the model. For pruned models, the SRILM toolkit does not impose such a requirement, hence explicit arcs are added to the Table 3 : Perplexity reductions achieved with marginal distribution constraints (MDC) on the heavily pruned models from Chelba et al. (2010) , and a mixture model. WFST ngram counts are slightly higher than ARPA format in Table 2 due to adding prefix and suffix n-grams.",
                "cite_spans": [
                    {
                        "start": 353,
                        "end": 373,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 233,
                        "end": 240,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 455,
                        "end": 462,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "model during conversion, with probability equal to what they would receive in the the original model. The resulting model is equivalent, but with a small number of additional arcs in the explicit representation (around 1% for the most heavily pruned models). Table 3 presents perplexity results for models that result from applying our marginal distribution constraints to the four heavily pruned models from Table 2 . In all four cases, we get perplexity reductions of around 10 points. We present the number of n-grams represented explicitly in the WFST, which is a slight increase from those presented in Table 2 due to the reintroduction of prefix and suffix n-grams.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 259,
                        "end": 266,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 409,
                        "end": 416,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 608,
                        "end": 615,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "In addition to the four models reported in Chelba et al. (2010) , we produced a mixture model by interpolating (with equal weight) smoothed ngram probabilities from the full (unpruned) absolute discounting, Witten-Bell and Katz models, which share the same set of n-grams. After renormalizing and pruning to approximately the same size as the other models, we get commensurate gains using this model as with the other models. Figure 3 demonstrates the importance of iterating the steady state history calculation. All of the methods achieve perplexity reductions with subsequent iterations. Katz and absolute discounting achieve very little reduction in the first iteration, but catch back up in the second iteration.",
                "cite_spans": [
                    {
                        "start": 43,
                        "end": 63,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 426,
                        "end": 434,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
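            {
                "text": "The equal-weight mixture can be sketched as a simple linear interpolation over models that share the same n-gram set (our own toy illustration over plain dicts; the experiments interpolate, renormalize and prune the actual WFST models).

def interpolate(models, weights=None):
    # models: list of dicts mapping an n-gram (history, word) to its smoothed
    # probability; all models are assumed to contain exactly the same n-grams.
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    return {ngram: sum(w * m[ngram] for w, m in zip(weights, models))
            for ngram in models[0]}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },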
            {
                "text": "The other iterative part of the algorithm, discussed in Section 4.3, is the denominator of equation 8, which changes due to adjustments in the backoff weights required by the revised n-gram probabilities. If we do not iteratively update the backoff weights when reestimating the weights, the 'Pruned+MDC' perplexities in Table 3 increase by between 0.2-0.4 points. Hence, iterating the steady state probability calculation is quite important, as illustrated by Figure 3 ; iterating the denominator calculation much less so, at least for these models. We noted in Section 3 that a key difference between our approach and Kneser and Ney (1995) is that their approach treated the denominator as a constant. If we do this, the 'Pruned+MDC' perplexities increase by between 4.5-5.6 points, i.e., about half of the perplexity reduction is lost for each method. Thus, while iteration of denominator calculation may not be critical, it should not be treated as a constant.",
                "cite_spans": [
                    {
                        "start": 620,
                        "end": 641,
                        "text": "Kneser and Ney (1995)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 321,
                        "end": 328,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 461,
                        "end": 469,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "We now look at the impacts on system performance we can achieve with these new models 4 , and whether the perplexity differences that we observe translate to real error rate reductions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "For automatic speech recognition experiments, we used as test set the 1997 Hub4 evaluation set consisting of 32,689 words. The acoustic model is a tied-state triphone GMM-based HMM whose input features are 9-frame stacked 13-dimensional PLP-cepstral coefficients projected down to 39 dimensions using LDA. The model was trained on the 1996 and 1997 Hub4 acoustic model training sets (about 150 hours of data) using semi-tied covariance modeling and CMLLR-based speaker adaptive training and 4 iterations of boosted MMI.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "We used a multi-pass decoding strategy: two quick passes for adaptation supervision, CMLLR and MLLR estimation; then a slower full decoding pass running about 3 times slower than real time. Table 4 presents recognition results for the heavily pruned models that we have been considering, both for first pass decoding and rescoring of the resulting lattices using failure transitions rather than epsilon backoff approximations. Chelba et al. (2010) , and a mixture model. Kneser-Ney results are shown for: a) original pruning; and b) with -prune-history-lm switch.",
                "cite_spans": [
                    {
                        "start": 427,
                        "end": 447,
                        "text": "Chelba et al. (2010)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 190,
                        "end": 197,
                        "text": "Table 4",
                        "ref_id": "TABREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
            {
                "text": "The perplexity reductions that were achieved for these models do translate to real word error rate reductions at both stages of between 0.5 and 0.9 percent absolute. All of these gains are statistically significant at p < 0.0001 using the stratified shuffling test (Yeh, 2000) . For pruned Kneser-Ney models, fixing the state marginals with the -prune-history-lm switch reduces the WER versus the original pruned model, but no reductions were achieved vs. baseline methods. Table 5 presents perplexity and WER results for less heavily pruned models, where the pruning thresholds were set to yield approximately 1.5 million n-grams (4 times more than the previous models); and another set at around 5 million n-grams, as well as the full, unpruned models. While the robust gains we've observed up to now persist with the 1.5M n-gram models (WER reductions significant, Witten-Bell at p < 0.02, others at p < 0.0001), the larger models yield diminishing gains, with no real WER improvements. Performance of Witten-Bell models with the marginal distribution constraints degrade badly for the larger models, indicating that this method of regularization, unmodified by aggressive pruning, does not provide a well suited distribution for this sort of optimization. We speculate that this is due to underregularization, having noted some floating point precision issues when allowing the backoff recalculation to run indefinitely.",
                "cite_spans": [
                    {
                        "start": 265,
                        "end": 276,
                        "text": "(Yeh, 2000)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 474,
                        "end": 481,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },
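            {
                "text": "The significance test can be sketched as a paired randomization over per-utterance error counts; the sketch below is a generic illustration of the stratified shuffling idea of Yeh (2000), not the exact scripts used to obtain the reported p-values.

import random

def shuffle_test(errors_a, errors_b, trials=10000, seed=0):
    # errors_a[i], errors_b[i]: error counts of the two systems on utterance i.
    # Repeatedly swap the two systems' outputs on each utterance with
    # probability 0.5 and count how often the absolute difference in total
    # errors is at least as large as the observed one.
    rng = random.Random(seed)
    observed = abs(sum(errors_a) - sum(errors_b))
    hits = 0
    for _ in range(trials):
        diff = 0
        for a, b in zip(errors_a, errors_b):
            if rng.random() < 0.5:
                a, b = b, a
            diff += a - b
        if abs(diff) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental results",
                "sec_num": "5"
            },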
            {
                "text": "The presented method reestimates lower order n-gram model parameters for a given smoothed backoff model, achieving perplexity and WER reductions for many smoothed models. There remain a number of open questions to investigate in the future. Recall that the numerator in Eq. 8 can be less than zero, meaning that no parameterization would lead to a model with the target frequency of the lower order n-gram, presumably due to over-or under-regularization. We anticipate a pre-constraint on the baseline smoothing method, that would recognize this problem and adjust the smoothing to ensure that a solution does exist. Additionally, it is clear that different regularization methods yield different behaviors, notably that large, relatively lightly pruned Witten-Bell models yield poor results. We will look to identify the issues with such models and provide general guidelines for prepping models prior to processing. Finally, we would like to perform extensive controlled experimentation to examine the relative contribution of the various aspects of our approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary and Future Directions",
                "sec_num": "6"
            },
            {
                "text": "Thanks to Ciprian Chelba and colleagues for the scripts to replicate their results. This work was supported in part by a Google Faculty Research Award and NSF grant #IIS-0964102. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF. Table 5 : Perplexity (PPL) and both first pass (FP) and rescoring (RS) WER reductions for less heavily pruned models using marginal distribution constraints (MDC).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 342,
                        "end": 349,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Summary and Future Directions",
                "sec_num": "6"
            },
            {
                "text": "www.opengrm.org",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Since each negatively-weighted arc leaving a state exactly cancels an epsilon arc followed by a matching positively-weighted arc in each iteration of the power method, convergence is assured.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "For space purposes, we exclude the Ristad method from this point forward since it is not competitive with the others.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "OpenFst: A general and efficient weighted finite-state transducer library",
                "authors": [
                    {
                        "first": "Cyril",
                        "middle": [],
                        "last": "Allauzen",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Riley",
                        "suffix": ""
                    },
                    {
                        "first": "Johan",
                        "middle": [],
                        "last": "Schalkwyk",
                        "suffix": ""
                    },
                    {
                        "first": "Wojciech",
                        "middle": [],
                        "last": "Skut",
                        "suffix": ""
                    },
                    {
                        "first": "Mehryar",
                        "middle": [],
                        "last": "Mohri",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Twelfth International Conference on Implementation and Application of Automata (CIAA 2007)",
                "volume": "4793",
                "issue": "",
                "pages": "11--23",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wo- jciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Twelfth International Conference on Implementation and Application of Automata (CIAA 2007), Lecture Notes in Computer Science, volume 4793, pages 11-23.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Study on interaction between entropy pruning and Kneser-Ney smoothing",
                "authors": [
                    {
                        "first": "Ciprian",
                        "middle": [],
                        "last": "Chelba",
                        "suffix": ""
                    },
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    },
                    {
                        "first": "Will",
                        "middle": [],
                        "last": "Neveitt",
                        "suffix": ""
                    },
                    {
                        "first": "Peng",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of Interspeech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ciprian Chelba, Thorsten Brants, Will Neveitt, and Peng Xu. 2010. Study on interaction between en- tropy pruning and Kneser-Ney smoothing. In Pro- ceedings of Interspeech, page 24222425.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "An empirical study of smoothing techniques for language modeling",
                "authors": [
                    {
                        "first": "Stanley",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Joshua",
                        "middle": [],
                        "last": "Goodman",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stanley Chen and Joshua Goodman. 1998. An em- pirical study of smoothing techniques for language modeling. Technical Report, TR-10-98, Harvard University.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A bit of progress in language modeling",
                "authors": [
                    {
                        "first": "Joshua",
                        "middle": [],
                        "last": "Goodman",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Computer Speech and Language",
                "volume": "15",
                "issue": "4",
                "pages": "403--434",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Joshua Goodman. 2001. A bit of progress in lan- guage modeling. Computer Speech and Language, 15(4):403-434.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Estimation of probabilities from sparse data for the language model component of a speech recogniser",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Slava",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Katz",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "IEEE Transactions on Acoustic, Speech, and Signal Processing",
                "volume": "35",
                "issue": "3",
                "pages": "400--401",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recogniser. IEEE Transactions on Acoustic, Speech, and Signal Processing, 35(3):400-401.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Improved backing-off for m-gram language modeling",
                "authors": [
                    {
                        "first": "Reinhard",
                        "middle": [],
                        "last": "Kneser",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP)",
                "volume": "",
                "issue": "",
                "pages": "181--184",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Pro- ceedings of the International Conference on Acous- tics, Speech, and Signal Processing (ICASSP), pages 181-184.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "On structuring probabilistic dependences in stochastic language modeling",
                "authors": [
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    },
                    {
                        "first": "Ute",
                        "middle": [],
                        "last": "Essen",
                        "suffix": ""
                    },
                    {
                        "first": "Reinhard",
                        "middle": [],
                        "last": "Kneser",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Computer Speech and Language",
                "volume": "8",
                "issue": "",
                "pages": "1--38",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependences in stochas- tic language modeling. Computer Speech and Lan- guage, 8:1-38.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A natural law of succession",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [
                            "S"
                        ],
                        "last": "Ristad",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eric S. Ristad. 1995. A natural law of succession. Technical Report, CS-TR-495-95, Princeton Univer- sity.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "The OpenGrm open-source finite-state grammar software libraries",
                "authors": [
                    {
                        "first": "Brian",
                        "middle": [],
                        "last": "Roark",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Sproat",
                        "suffix": ""
                    },
                    {
                        "first": "Cyril",
                        "middle": [],
                        "last": "Allauzen",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Riley",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Sorensen",
                        "suffix": ""
                    },
                    {
                        "first": "Terry",
                        "middle": [],
                        "last": "Tai",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the ACL 2012 System Demonstrations",
                "volume": "",
                "issue": "",
                "pages": "61--66",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brian Roark, Richard Sproat, Cyril Allauzen, Michael Riley, Jeffrey Sorensen, and Terry Tai. 2012. The OpenGrm open-source finite-state grammar soft- ware libraries. In Proceedings of the ACL 2012 Sys- tem Demonstrations, pages 61-66.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Scalable backoff language models",
                "authors": [
                    {
                        "first": "Kristie",
                        "middle": [],
                        "last": "Seymore",
                        "suffix": ""
                    },
                    {
                        "first": "Ronald",
                        "middle": [],
                        "last": "Rosenfeld",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of the International Conference on Spoken Language Processing (ICSLP)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kristie Seymore and Ronald Rosenfeld. 1996. Scal- able backoff language models. In Proceedings of the International Conference on Spoken Language Processing (ICSLP).",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "On growing and pruning kneserney smoothed n-gram models",
                "authors": [
                    {
                        "first": "Vesa",
                        "middle": [],
                        "last": "Siivola",
                        "suffix": ""
                    },
                    {
                        "first": "Teemu",
                        "middle": [],
                        "last": "Hirsimaki",
                        "suffix": ""
                    },
                    {
                        "first": "Sami",
                        "middle": [],
                        "last": "Virpioja",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "IEEE Transactions on Audio, Speech, and Language Processing",
                "volume": "15",
                "issue": "5",
                "pages": "1617--1624",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vesa Siivola, Teemu Hirsimaki, and Sami Virpioja. 2007. On growing and pruning kneserney smoothed n-gram models. IEEE Transactions on Audio, Speech, and Language Processing, 15(5):1617- 1624.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Numerical methods for computing stationary distributions of finite irreducible markov chains. Computational Probability",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "William",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Stewart",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "81--111",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "William J Stewart. 1999. Numerical methods for com- puting stationary distributions of finite irreducible markov chains. Computational Probability, pages 81-111.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Srilm at sixteen: Update and outlook",
                "authors": [
                    {
                        "first": "Andreas",
                        "middle": [],
                        "last": "Stolcke",
                        "suffix": ""
                    },
                    {
                        "first": "Jing",
                        "middle": [],
                        "last": "Zheng",
                        "suffix": ""
                    },
                    {
                        "first": "Wen",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Victor",
                        "middle": [],
                        "last": "Abrash",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andreas Stolcke, Jing Zheng, Wen Wang, and Victor Abrash. 2011. Srilm at sixteen: Update and out- look. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Entropy-based pruning of backoff language models",
                "authors": [
                    {
                        "first": "Andreas",
                        "middle": [],
                        "last": "Stolcke",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proc. DARPA Broadcast News Transcription and Understanding Workshop",
                "volume": "",
                "issue": "",
                "pages": "270--274",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broad- cast News Transcription and Understanding Work- shop, pages 270-274.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Randomized language models via perfect hash functions",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Talbot",
                        "suffix": ""
                    },
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of ACL-08: HLT",
                "volume": "",
                "issue": "",
                "pages": "505--513",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Talbot and Thorsten Brants. 2008. Randomized language models via perfect hash functions. In Pro- ceedings of ACL-08: HLT, pages 505-513.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Smoothed Bloom filter language models: Tera-scale LMs on the cheap",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Talbot",
                        "suffix": ""
                    },
                    {
                        "first": "Miles",
                        "middle": [],
                        "last": "Osborne",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
                "volume": "",
                "issue": "",
                "pages": "468--476",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Talbot and Miles Osborne. 2007. Smoothed Bloom filter language models: Tera-scale LMs on the cheap. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 468-476.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "The zerofrequency problem: Estimating the probabilities of novel events in adaptive text compression",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Ian",
                        "suffix": ""
                    },
                    {
                        "first": "Timothy",
                        "middle": [
                            "C"
                        ],
                        "last": "Witten",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Bell",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "IEEE Transactions on Information Theory",
                "volume": "37",
                "issue": "4",
                "pages": "1085--1094",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ian H. Witten and Timothy C. Bell. 1991. The zero- frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37(4):1085- 1094.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "More accurate tests for the statistical significance of result differences",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Yeh",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 18th International COLING",
                "volume": "",
                "issue": "",
                "pages": "947--953",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International COLING, pages 947-953.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "N-gram weighted automaton schema. State labels are presented for convenience, to specify the history implicitly encoded by the state."
            },
            "FIGREF1": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Schemata showing failure arc handling: (a) \u03c6 removal: add w arc (red), delete \u03c6 arc; (b) \u03c6 replacement: add w arc (red), replace \u03c6 by (red)"
            },
            "FIGREF3": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Models resulting from different numbers of parameter re-estimation iterations. Iteration 0 is the baseline pruned model."
            },
            "TABREF1": {
                "html": null,
                "text": "",
                "content": "<table/>",
                "type_str": "table",
                "num": null
            },
            "TABREF3": {
                "html": null,
                "text": "Replication ofChelba et al. (2010) using provided script. Using the script, the size of the unpruned model is 31,091,219 ngrams, 4,041 fewer thanChelba et al. (2010).",
                "content": "<table><tr><td>\u2020 Kneser-Ney model pruned using -prune-history-lm switch in SRILM.</td></tr></table>",
                "type_str": "table",
                "num": null
            },
            "TABREF6": {
                "html": null,
                "text": "WER reductions achieved with marginal distribution constraints (MDC) on the heavily pruned models from",
                "content": "<table/>",
                "type_str": "table",
                "num": null
            }
        }
    }
}