{
    "paper_id": "P11-1027",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:46:27.980235Z"
    },
    "title": "Faster and Smaller N -Gram Language Models",
    "authors": [
        {
            "first": "Adam",
            "middle": [],
            "last": "Pauls",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of California",
                "location": {
                    "settlement": "Berkeley"
                }
            },
            "email": "adpauls@cs.berkeley.edu"
        },
        {
            "first": "Dan",
            "middle": [],
            "last": "Klein",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of California",
                "location": {
                    "settlement": "Berkeley"
                }
            },
            "email": "klein@cs.berkeley.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "N-gram language models are a major resource bottleneck in machine translation. In this paper, we present several language model implementations that are both highly compact and fast to query. Our fastest implementation is as fast as the widely used SRILM while requiring only 25% of the storage. Our most compact representation can store all 4 billion n-grams and associated counts for the Google n-gram corpus in 23 bits per n-gram, the most compact lossless representation to date, and even more compact than recent lossy compression techniques. We also discuss techniques for improving query speed during decoding, including a simple but novel language model caching technique that improves the query speed of our language models (and SRILM) by up to 300%.",
    "pdf_parse": {
        "paper_id": "P11-1027",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "N-gram language models are a major resource bottleneck in machine translation. In this paper, we present several language model implementations that are both highly compact and fast to query. Our fastest implementation is as fast as the widely used SRILM while requiring only 25% of the storage. Our most compact representation can store all 4 billion n-grams and associated counts for the Google n-gram corpus in 23 bits per n-gram, the most compact lossless representation to date, and even more compact than recent lossy compression techniques. We also discuss techniques for improving query speed during decoding, including a simple but novel language model caching technique that improves the query speed of our language models (and SRILM) by up to 300%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "For modern statistical machine translation systems, language models must be both fast and compact. The largest language models (LMs) can contain as many as several hundred billion n-grams (Brants et al., 2007) , so storage is a challenge. At the same time, decoding a single sentence can trigger hundreds of thousands of queries to the language model, so speed is also critical. As always, trade-offs exist between time, space, and accuracy, with many recent papers considering smallbut-approximate noisy LMs (Chazelle et al., 2004; Guthrie and Hepple, 2010) or small-but-slow compressed LMs (Germann et al., 2009) .",
                "cite_spans": [
                    {
                        "start": 188,
                        "end": 209,
                        "text": "(Brants et al., 2007)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 509,
                        "end": 532,
                        "text": "(Chazelle et al., 2004;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 533,
                        "end": 558,
                        "text": "Guthrie and Hepple, 2010)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 592,
                        "end": 614,
                        "text": "(Germann et al., 2009)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we present several lossless methods for compactly but efficiently storing large LMs in memory. As in much previous work (Whittaker and Raj, 2001; Hsu and Glass, 2008) , our methods are conceptually based on tabular trie encodings wherein each n-gram key is stored as the concatenation of one word (here, the last) and an offset encoding the remaining words (here, the context). After presenting a bit-conscious basic system that typifies such approaches, we improve on it in several ways. First, we show how the last word of each entry can be implicitly encoded, almost entirely eliminating its storage requirements. Second, we show that the deltas between adjacent entries can be efficiently encoded with simple variable-length encodings. Third, we investigate block-based schemes that minimize the amount of compressed-stream scanning during lookup.",
                "cite_spans": [
                    {
                        "start": 135,
                        "end": 160,
                        "text": "(Whittaker and Raj, 2001;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 161,
                        "end": 181,
                        "text": "Hsu and Glass, 2008)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To speed up our language models, we present two approaches. The first is a front-end cache. Caching itself is certainly not new to language modeling, but because well-tuned LMs are essentially lookup tables to begin with, naive cache designs only speed up slower systems. We present a direct-addressing cache with a fast key identity check that speeds up our systems (or existing fast systems like the widelyused, speed-focused SRILM) by up to 300%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
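A direct-addressing cache of this flavor can be sketched as a fixed-size, hash-indexed table where a hit requires an exact key match, so no collision chaining is needed. This is an illustrative sketch only; the class and method names are ours, not the paper's code.

```python
class DirectAddressCache:
    """Fixed-size cache: each key hashes to exactly one slot, and a hit
    requires the stored key to be identical, so collisions simply evict."""

    def __init__(self, size_bits: int = 16):
        self.mask = (1 << size_bits) - 1
        self.keys = [None] * (self.mask + 1)
        self.vals = [0.0] * (self.mask + 1)

    def get(self, key: int):
        i = hash(key) & self.mask
        # The fast key identity check: return a value only on an exact match.
        return self.vals[i] if self.keys[i] == key else None

    def put(self, key: int, val: float) -> None:
        i = hash(key) & self.mask      # a colliding entry just overwrites
        self.keys[i] = key
        self.vals[i] = val

cache = DirectAddressCache()
cache.put(0xDEADBEEF, -3.5)
assert cache.get(0xDEADBEEF) == -3.5
assert cache.get(12345) is None       # never inserted, so a guaranteed miss
```

Because stale entries are silently evicted rather than chained, lookups cost one hash and one comparison, which is why such a cache can help even in front of an already fast LM.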
            {
                "text": "Our second speed-up comes from a more fundamental change to the language modeling interface. Where classic LMs take word tuples and produce counts or probabilities, we propose an LM that takes a word-and-context encoding (so the context need not be re-looked up) and returns both the probability and also the context encoding for the suffix of the original query. This setup substantially accelerates the scrolling queries issued by decoders, and also exploits language model state equivalence (Li and Khudanpur, 2008) .",
                "cite_spans": [
                    {
                        "start": 494,
                        "end": 518,
                        "text": "(Li and Khudanpur, 2008)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Overall, we are able to store the 4 billion n-grams of the Google Web1T (Brants and Franz, 2006) cor-pus, with associated counts, in 10 GB of memory, which is smaller than state-of-the-art lossy language model implementations (Guthrie and Hepple, 2010) , and significantly smaller than the best published lossless implementation (Germann et al., 2009) . We are also able to simultaneously outperform SRILM in both total size and speed. Our LM toolkit, which is implemented in Java and compatible with the standard ARPA file formats, is available on the web. 1",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 96,
                        "text": "(Brants and Franz, 2006)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 226,
                        "end": 252,
                        "text": "(Guthrie and Hepple, 2010)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 329,
                        "end": 351,
                        "text": "(Germann et al., 2009)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our goal in this paper is to provide data structures that map n-gram keys to values, i.e. probabilities or counts. Maps are fundamental data structures and generic implementations of mapping data structures are readily available. However, because of the sheer number of keys and values needed for n-gram language modeling, generic implementations do not work efficiently \"out of the box.\" In this section, we will review existing techniques for encoding the keys and values of an n-gram language model, taking care to account for every bit of memory required by each implementation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "To provide absolute numbers for the storage requirements of different implementations, we will use the Google Web1T corpus as a benchmark. This corpus, which is on the large end of corpora typically employed in language modeling, is a collection of nearly 4 billion n-grams extracted from over a trillion tokens of English text, and has a vocabulary of about 13.5 million words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preliminaries",
                "sec_num": "2"
            },
            {
                "text": "In the Web1T corpus, the most frequent n-gram occurs about 95 billion times. Storing this count explicitly would require 37 bits, but, as noted by Guthrie and Hepple (2010) , the corpus contains only about 770 000 unique counts, so we can enumerate all counts using only 20 bits, and separately store an array called the value rank array which converts the rank encoding of a count back to its raw count. The additional array is small, requiring only about 3MB, but we save 17 bits per n-gram, reducing value storage from around 16GB to about 9GB for Web1T.",
                "cite_spans": [
                    {
                        "start": 147,
                        "end": 172,
                        "text": "Guthrie and Hepple (2010)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Encoding Values",
                "sec_num": "2.1"
            },
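The count rank-encoding just described can be sketched as follows; the function and variable names are illustrative, not the paper's toolkit (which is in Java).

```python
import math

def rank_encode(counts):
    """Replace raw counts with ranks into the sorted list of unique counts,
    plus a small "value rank array" that maps a rank back to its raw count."""
    value_rank = sorted(set(counts))                 # one entry per unique count
    rank_of = {c: r for r, c in enumerate(value_rank)}
    ranks = [rank_of[c] for c in counts]
    v = max(1, math.ceil(math.log2(len(value_rank))))  # bits per stored rank
    return ranks, value_rank, v

ranks, value_rank, v = rank_encode([5, 7, 5, 95_000_000_000, 7])
# Decoding a rank recovers the original count exactly, so this is lossless.
assert [value_rank[r] for r in ranks] == [5, 7, 5, 95_000_000_000, 7]
```

With roughly 770,000 unique counts, ceil(log2(770000)) = 20, which matches the 20 bits per value cited for Web1T, versus the 37 bits a raw 95-billion count would need.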
            {
                "text": "We can rank encode probabilities and back-offs in the same way, allowing us to be agnostic to whether we encode counts, probabilities and/or back-off weights in our model. In general, the number of bits per value required to encode all value ranks for a given language model will vary -we will refer to this variable as v .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Encoding Values",
                "sec_num": "2.1"
            },
            {
                "text": "The data structure of choice for the majority of modern language model implementations is a trie (Fredkin, 1960) . Tries or variants thereof are implemented in many LM tool kits, including SRILM (Stolcke, 2002) , IRSTLM (Federico and Cettolo, 2007) , CMU SLM (Whittaker and Raj, 2001) , and MIT LM (Hsu and Glass, 2008) . Tries represent collections of n-grams using a tree. Each node in the tree encodes a word, and paths in the tree correspond to n-grams in the collection. Tries ensure that each n-gram prefix is represented only once, and are very efficient when n-grams share common prefixes. Values can also be stored in a trie by placing them in the appropriate nodes.",
                "cite_spans": [
                    {
                        "start": 97,
                        "end": 112,
                        "text": "(Fredkin, 1960)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 195,
                        "end": 210,
                        "text": "(Stolcke, 2002)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 220,
                        "end": 248,
                        "text": "(Federico and Cettolo, 2007)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 259,
                        "end": 284,
                        "text": "(Whittaker and Raj, 2001)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 298,
                        "end": 319,
                        "text": "(Hsu and Glass, 2008)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trie-Based Language Models",
                "sec_num": "2.2"
            },
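The prefix sharing that makes tries attractive can be illustrated with a minimal sketch. This is a conceptual pointer-based trie, not the paper's compact representation, and all names are ours.

```python
class TrieNode:
    """One trie node: the outgoing edges (one per word) and an optional value."""
    __slots__ = ("children", "value")

    def __init__(self):
        self.children = {}   # word -> TrieNode
        self.value = None    # count/probability if an n-gram ends at this node

def insert(root, ngram, value):
    node = root
    for word in ngram:
        node = node.children.setdefault(word, TrieNode())
    node.value = value       # store the value at the node for the full n-gram

root = TrieNode()
insert(root, ("the", "cat", "slept"), 6)
insert(root, ("the", "cat", "ran"), 2)

# The shared prefix "the cat" is represented once; both trigrams hang off it.
shared = root.children["the"].children["cat"]
assert sorted(shared.children) == ["ran", "slept"]
assert shared.children["slept"].value == 6
```

The per-node object and pointer overhead of exactly this kind of representation is what the SRILM cost analysis in the next paragraph quantifies.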
            {
                "text": "Conceptually, trie nodes can be implemented as records that contain two entries: one for the word in the node, and one for either a pointer to the parent of the node or a list of pointers to children. At a low level, however, naive implementations of tries can waste significant amounts of space. For example, the implementation used in SRILM represents a trie node as a C struct containing a 32-bit integer representing the word, a 64-bit memory 2 pointer to the list of children, and a 32-bit floating point number representing the value stored at a node. The total storage for a node alone is 16 bytes, with additional overhead required to store the list of children. In total, the most compact implementation in SRILM uses 33 bytes per n-gram of storage, which would require around 116 GB of memory to store Web1T.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trie-Based Language Models",
                "sec_num": "2.2"
            },
            {
                "text": "While it is simple to implement a trie node in this (already wasteful) way in programming languages that offer low-level access to memory allocation like C/C++, the situation is even worse in higher level programming languages. In Java, for example, Cstyle structs are not available, and records are most naturally implemented as objects that carry an additional 64 bits of overhead.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trie-Based Language Models",
                "sec_num": "2.2"
            },
            {
                "text": "Despite its relatively large storage requirements, the implementation employed by SRILM is still widely in use today, largely because of its speed -to our knowledge, SRILM is the fastest freely available language model implementation. We will show that we can achieve access speeds comparable to SRILM but using only 25% of the storage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trie-Based Language Models",
                "sec_num": "2.2"
            },
            {
                "text": "A more compact implementation of a trie is described in Whittaker and Raj (2001) . In their implementation, nodes in a trie are represented implicitly as entries in an array. Each entry encodes a word with enough bits to index all words in the language model (24 bits for Web1T), a quantized value, and a 32-bit 3 offset that encodes the contiguous block of the array containing the children of the node. Note that 32 bits is sufficient to index all n-grams in Web1T; for larger corpora, we can always increase the size of the offset.",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 80,
                        "text": "Whittaker and Raj (2001)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implicit Tries",
                "sec_num": "2.3"
            },
            {
                "text": "Effectively, this representation replaces systemlevel memory pointers with offsets that act as logical pointers that can reference other entries in the array, rather than arbitrary bytes in RAM. This representation saves space because offsets require fewer bits than memory pointers, but more importantly, it permits straightforward implementation in any higherlevel language that provides access to arrays of integers. 4",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implicit Tries",
                "sec_num": "2.3"
            },
            {
                "text": "Hsu and Glass (2008) describe a variant of the implicit tries of Whittaker and Raj (2001) in which each node in the trie stores the prefix (i.e. parent). This representation has the property that we can refer to each n-gram w n 1 by its last word w n and the offset c(w n\u22121 1 ) of its prefix w n\u22121 1 , often called the context. At a low-level, we can efficiently encode this pair (w n , c(w n\u22121 1 )) as a single 64-bit integer, where the first 24 bits refer to w n and the last 40 bits encode c(w n\u22121 1 ). We will refer to this encoding as a context encoding.",
                "cite_spans": [
                    {
                        "start": 65,
                        "end": 89,
                        "text": "Whittaker and Raj (2001)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Encoding n-grams",
                "sec_num": "2.4"
            },
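The 64-bit context encoding can be sketched as plain bit packing. Putting the word in the high bits (one plausible reading of "first 24 bits") makes keys sort by word and then by context offset; that placement, and the function names, are assumptions of this sketch.

```python
WORD_BITS = 24       # enough to index Web1T's ~13.5M-word vocabulary
CONTEXT_BITS = 40    # offset c(w_1^{n-1}) of the prefix ("context")

def pack(word_id: int, context: int) -> int:
    """Encode the pair (w_n, c(w_1^{n-1})) as a single 64-bit integer key."""
    assert 0 <= word_id < (1 << WORD_BITS)
    assert 0 <= context < (1 << CONTEXT_BITS)
    return (word_id << CONTEXT_BITS) | context

def unpack(key: int):
    """Recover (word_id, context_offset) from a packed key."""
    return key >> CONTEXT_BITS, key & ((1 << CONTEXT_BITS) - 1)

key = pack(13_500_000, 3_800_000_000)   # vocabulary- and corpus-scale values fit
assert unpack(key) == (13_500_000, 3_800_000_000)
assert key < (1 << 64)
```

Note that 2^40 comfortably exceeds the ~4 billion n-grams in Web1T, so 40 bits suffice for any prefix offset.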
            {
                "text": "Note that typically, n-grams are encoded in tries in the reverse direction (first-rest instead of lastrest), which enables a more efficient computation of back-offs. In our implementations, we found that the speed improvement from switching to a first-rest encoding and implementing more efficient queries was modest. However, as we will see in Section 4.2, the last-rest encoding allows us to exploit the scrolling nature of queries issued by decoders, which results in speedups that far outweigh those achieved by reversing the trie.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Encoding n-grams",
                "sec_num": "2.4"
            },
            {
                "text": "In the previous section, we reviewed well-known techniques in language model implementation. In this section, we combine these techniques to build simple data structures in ways that are to our knowledge novel, producing language models with stateof-the-art memory requirements and speed. We will also show that our data structures can be very effectively compressed by implicitly encoding the word w n , and further compressed by applying a variablelength encoding on context deltas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language Model Implementations",
                "sec_num": "3"
            },
            {
                "text": "A standard way to implement a map is to store an array of key/value pairs, sorted according to the key. Lookup is carried out by performing binary search on a key. For an n-gram language model, we can apply this implementation with a slight modification: we need n sorted arrays, one for each n-gram order. We construct keys (w n , c(w n\u22121 1 )) using the context encoding described in the previous section, where the context offsets c refer to entries in the sorted array of (n \u2212 1)-grams. This data structure is shown graphically in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 534,
                        "end": 542,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Sorted Array",
                "sec_num": "3.1"
            },
            {
                "text": "Because our keys are sorted according to their context-encoded representation, we cannot straightforwardly answer queries about an n-gram w without first determining its context encoding. We can do this efficiently by building up the encoding incrementally: we start with the context offset of the unigram w 1 , which is simply its integer representation, and use that to form the context encoding of the bigram w 2 1 = (w 2 , c(w 1 )). We can find the offset of ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sorted Array",
                "sec_num": "3.1"
            },
            {
                "text": "Figure 1: Our SORTED implementation of a trie. The dotted paths correspond to \"the cat slept\", \"the cat ran\", and \"the dog ran\". Each node in the trie is an entry in an array with 3 parts: w represents the word at the node; val represents the (rank-encoded) value; and c is an offset in the array of (n-1)-grams that represents the parent (prefix) of a node. Words are represented as offsets in the unigram array.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 26,
                        "end": 34,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "val val val",
                "sec_num": null
            },
            {
                "text": "the bigram using binary search, and form the context encoding of the trigram, and so on. Note, however, that if our queries arrive in context-encoded form, queries are faster since they involve only one binary search in the appropriate array. We will return to this later in Section 4.2. This implementation, SORTED, uses 64 bits for the integer-encoded keys and v bits for the values. Lookup is linear in the length of the key and logarithmic in the number of n-grams. For Web1T (v = 20), the total storage is 10.5 bytes/n-gram or about 37GB.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "val val val",
                "sec_num": null
            },
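The incremental, context-encoded lookup described above can be sketched as follows. This is a minimal Python illustration under assumed names (`SortedNgramArrays`, lists of `(word, context_offset)` pairs per order), not the actual SORTED implementation; it also uses array positions as unigram offsets rather than raw word identifiers.

```python
import bisect

# Minimal sketch of the SORTED lookup (hypothetical names): one sorted
# array of (word, context_offset) keys per n-gram order, where an
# (n-1)-gram's array position serves as the context offset for n-grams.
class SortedNgramArrays:
    def __init__(self, arrays):
        # arrays[k] holds the sorted (word, context_offset) keys of all
        # (k+1)-grams; unigrams use 0 as their (empty) context offset.
        self.arrays = arrays

    def offset(self, ngram):
        """Return the array offset encoding ngram, or None if absent."""
        ctx = 0  # offset of the empty context
        for order, word in enumerate(ngram):
            key = (word, ctx)
            arr = self.arrays[order]
            i = bisect.bisect_left(arr, key)
            if i == len(arr) or arr[i] != key:
                return None
            ctx = i  # this offset encodes the prefix for the next order
        return ctx
```

Each binary search returns an offset that serves as the context encoding for the next-higher order, so looking up an n-gram costs n binary searches, one per array.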
            {
                "text": "Hash tables are another standard way to implement associative arrays. To enable the use of our context encoding, we require an implementation in which we can refer to entries in the hash table via array offsets. For this reason, we use an open-addressing hash map that uses linear probing for collision resolution.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hash Table",
                "sec_num": "3.2"
            },
            {
                "text": "As in the sorted array implementation, in order to insert an n-gram w_1^n into the hash table, we must form its context encoding incrementally from the offset of w_1. However, unlike the sorted array implementation, at query time, we only need to be able to check equality between the query key w_1^n = (w_n, c(w_1^{n-1})) and a key w'_1^n = (w'_n, c(w'_1^{n-1})) in the table. Equality can easily be checked by first checking if w_n = w'_n, then recursively checking equality between w_1^{n-1} and w'_1^{n-1}, though again, equality is even faster if the query is already context-encoded.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hash Table",
                "sec_num": "3.2"
            },
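A minimal sketch of this idea, with hypothetical names: an open-address table with linear probing in which slot indices double as the context offsets that higher-order keys refer to.

```python
# Sketch (hypothetical names) of the HASH structure: an open-address
# table with linear probing whose slot indices double as the context
# offsets referenced by higher-order n-gram keys.
class ProbingTable:
    EMPTY = None

    def __init__(self, capacity):
        self.keys = [self.EMPTY] * capacity  # (word, context_offset) pairs
        self.vals = [0] * capacity

    def _slot(self, key):
        i = hash(key) % len(self.keys)
        # Linear probing: walk forward until we hit the key or a hole.
        while self.keys[i] is not self.EMPTY and self.keys[i] != key:
            i = (i + 1) % len(self.keys)
        return i

    def insert(self, key, value):
        i = self._slot(key)
        self.keys[i], self.vals[i] = key, value
        return i  # this offset is what (n+1)-gram keys point at

    def lookup(self, key):
        i = self._slot(key)
        return None if self.keys[i] is self.EMPTY else self.vals[i]
```

Because an entry's slot index is stable, a higher-order key can store it directly as its context offset, exactly as in the sorted array case.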
            {
                "text": "This HASH data structure also uses 64 bits for integer-encoded keys and v bits for values. However, to avoid excessive hash collisions, we also allocate additional empty space according to a user-defined parameter that trades off speed and space; we used about 40% extra space in our experiments. For Web1T, the total storage for this implementation is 15 bytes/n-gram or about 53 GB total.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hash Table",
                "sec_num": "3.2"
            },
            {
                "text": "Lookup in a hash map is linear in the length of an n-gram and constant with respect to the number of n-grams. Unlike the sorted array implementation, the hash table implementation also permits efficient insertion and deletion, making it suitable for stream-based language models (Levenberg and Osborne, 2009).",
                "cite_spans": [
                    {
                        "start": 280,
                        "end": 309,
                        "text": "(Levenberg and Osborne, 2009)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hash Table",
                "sec_num": "3.2"
            },
            {
                "text": "The context encoding we have used thus far still wastes space. This is perhaps most evident in the sorted array representation (see Figure 1): all n-grams ending with a particular word w_i are stored contiguously. We can exploit this redundancy by storing only the context offsets in the main array, using as many bits as needed to encode all context offsets (32 bits for Web1T). In auxiliary arrays, one for each n-gram order, we store the beginning and end of the range of the trie array in which all (w_i, c) keys are stored for each w_i. These auxiliary arrays are negligibly small; we only need to store 2n offsets for each word.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 132,
                        "end": 140,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Implicitly Encoding w n",
                "sec_num": "3.3"
            },
            {
                "text": "The same trick can be applied in the hash table implementation. We allocate contiguous blocks of the main array for n-grams which all share the same last word w_i, and distribute keys within those ranges using the hash function.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implicitly Encoding w n",
                "sec_num": "3.3"
            },
            {
                "text": "This representation reduces memory usage for keys from 64 bits to 32 bits, reducing overall storage for Web1T to 6.5 bytes/n-gram for the sorted implementation and 9.1 bytes for the hashed implementation, or about 23GB and 32GB in total. It also increases query speed in the sorted array case, since to find (w_i, c), we only need to search the range of the array over which w_i applies. Because this implicit encoding reduces memory usage without a performance cost, we will assume its use for the rest of this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implicitly Encoding w n",
                "sec_num": "3.3"
            },
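The implicit encoding of w_n can be sketched as follows (hypothetical names: `ranges` plays the role of the small auxiliary per-word arrays, and `contexts` is the main array of context offsets, sorted within each word's range).

```python
import bisect

# Sketch of the implicit-w_n trick (hypothetical names): the main array
# stores only context offsets; a small auxiliary map gives, for each
# word, the [start, end) range of the main array holding its keys, so
# binary search is confined to that range.
def lookup_with_ranges(word, ctx, ranges, contexts):
    start, end = ranges[word]  # negligible auxiliary storage per word
    i = bisect.bisect_left(contexts, ctx, start, end)
    return i if i < end and contexts[i] == ctx else None
```

The word itself is never stored per key; it is recovered from which range the entry falls in, cutting the key from 64 to 32 bits.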
            {
                "text": "The distribution of value ranks in language modeling is Zipfian, with far more n-grams having low counts than high counts. If we ensure that the value rank array sorts raw values by descending order of frequency, then we expect that small ranks will occur much more frequently than large ones, which we can exploit with a variable-length encoding.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Variable-Length Coding",
                "sec_num": "3.4.1"
            },
            {
                "text": "To compress n-grams, we can exploit the context encoding of our keys. In Figure 2, we show a portion of the key array used in our sorted array implementation. While we have already exploited the fact that the 24 word bits repeat in the previous section, we note here that consecutive context offsets tend to be quite close together. We found that for 5-grams, the median difference between consecutive offsets was about 50, and 90% of offset deltas were smaller than 10000. By using a variable-length encoding to represent these deltas, we should require far fewer than 32 bits to encode context offsets. We used a very simple variable-length coding to encode offset deltas, word deltas, and value ranks. Our encoding, which is referred to as \"variable-length block coding\" in Boldi and Vigna (2005), works as follows: we pick a (configurable) radix r = 2^k. To encode a number m, we determine the number of digits d required to express m in base r. We write d in unary, i.e. d - 1 zeroes followed by a one. We then write the d digits of m in base r, each of which requires k bits. For example, using k = 2, we would encode the decimal number 7 as 010111. We can choose k separately for deltas and value indices, and also tune these parameters to a given language model.",
                "cite_spans": [
                    {
                        "start": 758,
                        "end": 780,
                        "text": "Boldi and Vigna (2005)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 73,
                        "end": 82,
                        "text": "Figure 2,",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Variable-Length Coding",
                "sec_num": "3.4.1"
            },
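A minimal sketch of the encoder for variable-length block coding as described above (the function name is ours, and a real implementation would emit packed bits rather than a character string):

```python
# Sketch of variable-length block coding (Boldi and Vigna, 2005): with
# radix r = 2**k, write the number of base-r digits d in unary
# (d - 1 zeroes then a one), followed by the d digits, k bits each.
def block_encode(m, k):
    digits = []
    while True:
        digits.append(m % (1 << k))  # least-significant base-r digit
        m >>= k
        if m == 0:
            break
    digits.reverse()
    unary = "0" * (len(digits) - 1) + "1"  # digit count in unary
    return unary + "".join(format(d, f"0{k}b") for d in digits)
```

With k = 2, the decimal number 7 has two base-4 digits (1 and 3), so the output is the unary length 01 followed by 01 and 11, i.e. 010111, matching the example above.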
            {
                "text": "We found this encoding outperformed other standard prefix codes, including Golomb codes (Golomb, 1966; Church et al., 2007) and Elias \u03b3 and \u03b4 codes. We also experimented with the \u03b6 codes of Boldi and Vigna (2005) , which modify variable-length block codes so that they are optimal for certain power law distributions. We found that \u03b6 codes performed no better than variable-length block codes and were slightly more complex. Finally, we found that Huffman codes outperformed our encoding slightly, but came at a much higher computational cost.",
                "cite_spans": [
                    {
                        "start": 88,
                        "end": 102,
                        "text": "(Golomb, 1966;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 103,
                        "end": 123,
                        "text": "Church et al., 2007)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 190,
                        "end": 212,
                        "text": "Boldi and Vigna (2005)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Variable-Length Coding",
                "sec_num": "3.4.1"
            },
            {
                "text": "We could in principle compress the entire array of key/value pairs with the encoding described above, but this would render binary search in the array impossible: we cannot jump to the mid-point of the array since in order to determine what key lies at a particular point in the compressed bit stream, we would need to know the entire history of offset deltas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block Compression",
                "sec_num": "3.4.2"
            },
            {
                "text": "Instead, we employ block compression, a technique also used by Harb et al. (2009) for smaller language models. In particular, we compress the key/value array in blocks of 128 bytes. At the beginning of the block, we write out a header consisting of: an explicit 64-bit key that begins the block; a 32-bit integer representing the offset of the header key in the uncompressed array; 5 the number of bits of compressed data in the block; and the variable-length encoding of the value rank of the header key. The remainder of the block is filled with as many compressed key/value pairs as possible. Once the block is full, we start a new block. See Figure 2 for a depiction.",
                "cite_spans": [
                    {
                        "start": 63,
                        "end": 81,
                        "text": "Harb et al. (2009)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 645,
                        "end": 653,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Block Compression",
                "sec_num": "3.4.2"
            },
            {
                "text": "When we encode an offset delta, we store the delta of the word portion of the key separately from the delta of the context offset. When an entire block shares the same word portion of the key, we set a single bit in the header that indicates that we do not encode any word deltas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block Compression",
                "sec_num": "3.4.2"
            },
            {
                "text": "To find a key in this compressed array, we first perform binary search over the header blocks (which are predictably located every 128 bytes), followed by a linear search within a compressed block.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block Compression",
                "sec_num": "3.4.2"
            },
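The header-then-scan lookup can be sketched as follows, using a hypothetical in-memory layout in place of the real 128-byte bit-packed blocks:

```python
import bisect

# A minimal sketch (hypothetical layout) of lookup in the
# block-compressed array: binary search over the explicit header keys,
# then a linear scan that reconstructs keys from deltas inside the block.
def compressed_lookup(key, blocks):
    # blocks[i] = (header_key, entries); entries are (key_delta, value)
    # pairs, the first with delta 0 standing for the header key itself.
    headers = [hk for hk, _ in blocks]
    b = bisect.bisect_right(headers, key) - 1
    if b < 0:
        return None  # key precedes the first block
    cur = blocks[b][0]
    for delta, value in blocks[b][1]:
        cur += delta  # rebuild the next key in sorted order
        if cur == key:
            return value
        if cur > key:
            return None  # keys are sorted, so we have passed it
    return None
```

Because headers are fixed-width and evenly spaced, the binary search needs no decompression; only one block is ever scanned per query.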
            {
                "text": "Using k = 6 for encoding offset deltas and k = 5 for encoding value ranks, this COMPRESSED implementation stores Web1T in less than 3 bytes per n-gram, or about 10.2GB in total. This is about 6GB less than the storage required by Germann et al. (2009) , which is the best published lossless compression to date.",
                "cite_spans": [
                    {
                        "start": 230,
                        "end": 251,
                        "text": "Germann et al. (2009)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block Compression",
                "sec_num": "3.4.2"
            },
            {
                "text": "In the previous section, we provided compact and efficient implementations of associative arrays that allow us to query a value for an arbitrary n-gram. However, decoders do not issue language model requests at random. In this section, we show that language model requests issued by a standard decoder exhibit two patterns we can exploit: they are highly repetitive, and also exhibit a scrolling effect.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Speeding up Decoding",
                "sec_num": "4"
            },
            {
                "text": "In a simple experiment, we recorded all of the language model queries issued by the Joshua decoder (Li et al., 2009) on a 100 sentence test set. Of the 31 million queries, only about 1 million were unique. Therefore, we expect that keeping the results of language model queries in a cache should be effective at reducing overall language model latency.",
                "cite_spans": [
                    {
                        "start": 99,
                        "end": 116,
                        "text": "(Li et al., 2009)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploiting Repetitive Queries",
                "sec_num": "4.1"
            },
            {
                "text": "To this end, we added a very simple cache to our language model. Our cache uses an array of key/value pairs with size fixed to 2^b - 1 for some integer b (we used b = 24). We use a b-bit hash function to compute the address in an array where we will always place a given n-gram and its fully computed language model score. Querying the cache is straightforward: we check the address of a key given by its b-bit hash. If the key located in the cache array matches the query key, then we return the value stored in the cache. Otherwise, we fetch the language model probability from the language model and place the new key and value in the cache, evicting the old key in the process. This scheme is often called a direct-mapped cache because each key has exactly one possible address.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploiting Repetitive Queries",
                "sec_num": "4.1"
            },
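A minimal sketch of such a direct-mapped cache (hypothetical names; here the array size is 2^b rather than 2^b - 1 so that a bit mask can replace the modulo):

```python
# Sketch of a direct-mapped cache: a fixed-size array indexed by a
# b-bit hash; each n-gram has exactly one possible slot, and a newer
# key simply evicts whatever occupied it. lm_score is any backing scorer.
class DirectMappedCache:
    def __init__(self, b, lm_score):
        self.mask = (1 << b) - 1
        self.slots = [None] * (1 << b)  # (key, score) pairs or None
        self.lm_score = lm_score

    def score(self, ngram):
        i = hash(ngram) & self.mask
        slot = self.slots[i]
        if slot is not None and slot[0] == ngram:
            return slot[1]  # cache hit: one hash, one lookup, one compare
        value = self.lm_score(ngram)  # miss: compute, then evict old key
        self.slots[i] = (ngram, value)
        return value
```

A hit costs exactly one hash evaluation, one memory lookup, and one equality check, which is why it beats even the HASH implementation's probing.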
            {
                "text": "Caching n-grams in this way reduces overall latency for two reasons: first, lookup in the cache is extremely fast, requiring only a single evaluation of the hash function, one memory lookup to find the cache key, and one equality check on the key. In contrast, even our fastest (HASH) implementation may have to perform multiple memory lookups and equality checks in order to resolve collisions. (Figure 3: Queries issued when scoring trigrams that are created when a state with LM context \"the cat\" combines with \"fell down\". In the standard explicit representation of an n-gram as a list of words, queries are issued atomically to the language model. When using a context encoding, a query from the n-gram \"the cat fell\" returns the context offset of \"cat fell\", which speeds up the query of \"cat fell down\".) Second, when calculating the probability for an n-gram",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 451,
                        "end": 459,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Exploiting Repetitive Queries",
                "sec_num": "4.1"
            },
            {
                "text": "not in the language model, language models with back-off schemes must in general perform multiple queries to fetch the necessary back-off information.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploiting Repetitive Queries",
                "sec_num": "4.1"
            },
            {
                "text": "Our cache retains the full result of these calculations and thus saves additional computation. Federico and Cettolo (2007) also employ a cache in their language model implementation, though theirs is based on a traditional hash table cache with linear probing. Unlike our cache, which is of fixed size, their cache must be cleared after decoding a sentence. We would not expect a large performance increase from such a cache for our faster models since our HASH implementation is already a hash table with linear probing. We found in our experiments that a cache using linear probing provided marginal performance increases of about 40%, largely because of cached back-off computation, while our simpler cache increases performance by about 300% even over our HASH LM implementation. More timing results are presented in Section 5.",
                "cite_spans": [
                    {
                        "start": 95,
                        "end": 122,
                        "text": "Federico and Cettolo (2007)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploiting Repetitive Queries",
                "sec_num": "4.1"
            },
            {
                "text": "Decoders with integrated language models (Och and Ney, 2004; Chiang, 2005) score partial translation hypotheses in an incremental way. Each partial hypothesis maintains a language model context consisting of at most n - 1 target-side words. When we combine two language model contexts, we create several new n-grams of length n, each of which generates a query to the language model. These new n-grams exhibit a scrolling effect, shown in Figure 3: the n - 1 suffix words of one n-gram form the n - 1 prefix words of the next. (Table 1: WMT2010 -- 1gm: 4,366,395; 2gm: 61,865,588; 3gm: 123,158,761; 4gm: 217,869,981; 5gm: 269,614,330; Total: 676,875,055. WEB1T -- 1gm: 13,588,391; 2gm: 314,843,401; 3gm: 977,069,902; 4gm: 1,313,818,354; 5gm: 1,176,470,663; Total: 3,795,790,711.)",
                "cite_spans": [
                    {
                        "start": 41,
                        "end": 60,
                        "text": "(Och and Ney, 2004;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 61,
                        "end": 73,
                        "text": "Chiang, 2005",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 397,
                        "end": 655,
                        "text": "WMT2010   Order  #n-grams  1gm  4,366,395  2gm  61,865,588  3gm  123,158,761  4gm  217,869,981  5gm  269,614,330  Total  676,875,055   WEB1T   Order  #n-grams  1gm  13,588,391  2gm  314,843,401  3gm  977,069,902  4gm  1,313,818,354  5gm  1,176,470,663  Total",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 715,
                        "end": 721,
                        "text": "Figure",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Exploiting Scrolling Queries",
                "sec_num": "4.2"
            },
            {
                "text": "As discussed in Section 3, our LM implementations can answer queries about context-encoded n-grams faster than explicitly encoded n-grams. With this in mind, we augment the values stored in our language model so that for a key (w_n, c(w_1^{n-1})), we store the offset of the suffix c(w_2^n) as well as the normal counts/probabilities. Then, rather than represent the LM context in the decoder as an explicit list of words, we can simply store context offsets. When we query the language model, we get back both a language model score and a context offset c(\u0175_1^{n-1}), where \u0175_1^{n-1} is the longest suffix of w_1^{n-1} contained in the language model. We can then quickly form the context encoding of the next query by simply concatenating the new word with the offset c(\u0175_1^{n-1}) returned from the previous query.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploiting Scrolling Queries",
                "sec_num": "4.2"
            },
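Scrolling queries can be sketched as follows; the interface (`empty_context`, `score_and_suffix`) is hypothetical, standing in for a language model that returns the matched suffix offset alongside each score:

```python
# Sketch (hypothetical interface) of scrolling queries: the language
# model returns, with each score, the context offset of the longest
# matched suffix, which directly forms the context encoding of the
# next query -- no per-word re-encoding of the context is needed.
def score_sentence(lm, words):
    total = 0.0
    ctx = lm.empty_context()  # offset standing for the empty context
    for w in words:
        score, ctx = lm.score_and_suffix(w, ctx)  # one query per word
        total += score
    return total
```

Because the returned offset already encodes the longest matched suffix, the decoder carries a single integer as its LM state instead of a word list.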
            {
                "text": "In addition to speeding up language model queries, this approach also automatically supports an equivalence of LM states (Li and Khudanpur, 2008): in standard back-off schemes, whenever we compute the probability for an n-gram (w_n, c(w_1^{n-1})) when w_1^{n-1} is not in the language model, the result will be the same as the result of the query (w_n, c(\u0175_1^{n-1})). It is therefore only necessary to store as much of the context as the language model contains instead of all n - 1 words in the context. If a decoder maintains LM states using the context offsets returned by our language model, then the decoder will automatically exploit this equivalence and the size of the search space will be reduced. This same effect is exploited explicitly by some decoders (Li and Khudanpur, 2008 ",
                "cite_spans": [
                    {
                        "start": 121,
                        "end": 145,
                        "text": "(Li and Khudanpur, 2008)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 763,
                        "end": 786,
                        "text": "(Li and Khudanpur, 2008",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploiting Scrolling Queries",
                "sec_num": "4.2"
            },
            {
                "text": "To test our LM implementations, we performed experiments with two different language models. Our first language model, WMT2010, was a 5-gram Kneser-Ney language model which stores probability/back-off pairs as values. We trained this language model on the English side of all French-English corpora provided 6 for use in the WMT 2010 workshop, about 2 billion tokens in total. This data was tokenized using the tokenizer.perl script provided with the data. We trained the language model using SRILM. We also extracted a count-based language model, WEB1T, from the Web1T corpus (Brants and Franz, 2006). Since this data is provided as a collection of 1- to 5-grams and associated counts, we used this data without further preprocessing. The makeup of these language models is shown in Table 1.",
                "cite_spans": [
                    {
                        "start": 575,
                        "end": 599,
                        "text": "(Brants and Franz, 2006)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 783,
                        "end": 790,
                        "text": "Table 1",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Experiments 5.1 Data",
                "sec_num": "5"
            },
            {
                "text": "We tested our three implementations (HASH, SORTED, and COMPRESSED) on the WMT2010 language model. For this language model, there are about 80 million unique probability/back-off pairs, so v \u2248 36. Note that here v includes both the cost per key of storing the value rank as well as the (amortized) cost of storing two 32-bit floating point numbers (probability and back-off) for each unique value. The results are shown in Table 2. We compare against three baselines. The first two, SRILM-H and SRILM-S, refer to the hash table- and sorted array-based trie implementations provided by SRILM. The third baseline is the Tightly-Packed Trie (TPT) implementation of Germann et al. (2009). Because this implementation is not freely available, we use their published memory usage in bytes per n-gram on a language model of similar size and project total usage.",
                "cite_spans": [
                    {
                        "start": 650,
                        "end": 671,
                        "text": "Germann et al. (2009)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compression Experiments",
                "sec_num": "5.2"
            },
            {
                "text": "The memory usage of all of our models is considerably smaller than SRILM -our HASH implementation is about 25% the size of SRILM-H, and our SORTED implementation is about 25% the size of SRILM-S. Our COMPRESSED implementation is also smaller than the state-of-the-art compressed TPT implementation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compression Experiments",
                "sec_num": "5.2"
            },
            {
                "text": "In Table 3, we show the results of our COMPRESSED implementation on WEB1T and against two baselines. The first is compression of the ASCII text count files using gzip, and the second is the Tiered Minimal Perfect Hash (T-MPHR) of Guthrie and Hepple (2010). The latter is a lossy compression technique based on Bloomier filters (Chazelle et al., 2004) and additional variable-length encoding that achieves the best published compression of WEB1T to date. Our COMPRESSED implementation is even smaller than T-MPHR, despite using a lossless compression technique. Note that since T-MPHR uses a lossy encoding, it is possible to reduce the storage requirements arbitrarily at the cost of additional errors in the model. We quote here the storage required when keys 7 are encoded using 12-bit hash codes, which gives a false positive rate of about 2^{-12} \u2248 0.02%. Table 5: Full decoding times for various language model implementations. Our HASH LM is as fast as SRILM while using 25% of the memory. Our caching also reduces total decoding time by about 20% for our fastest models and speeds up COMPRESSED by a factor of 6. Times were averaged over 3 runs on the same machine.",
                "cite_spans": [
                    {
                        "start": 232,
                        "end": 257,
                        "text": "Guthrie and Hepple (2010)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 330,
                        "end": 353,
                        "text": "(Chazelle et al., 2004)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 3",
                        "ref_id": "TABREF6"
                    },
                    {
                        "start": 859,
                        "end": 866,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Compression Experiments",
                "sec_num": "5.2"
            },
            {
                "text": "We first measured pure query speed by logging all LM queries issued by a decoder and measuring the time required to query those n-grams in isolation. We used the Joshua decoder 8 with the WMT2010 model to generate queries for the first 100 sentences of the French 2008 News test set. This produced about 30 million queries. We measured the time 9 required to perform each query in order with and without our direct-mapped caching, not including any time spent on file I/O. The results are shown in Table 4. As expected, HASH is the fastest of our implementations, and comparable 10 in speed to SRILM-H, but using significantly less space. 8 We used a grammar trained on all French-English data provided for WMT 2010 using the make scripts provided at http://sourceforge.net/projects/joshua/files/joshua/1.3/wmt2010-experiment.tgz/download 9 All experiments were performed on an Amazon EC2 High-Memory Quadruple Extra Large instance, with an Intel Xeon X5550 CPU running at 2.67GHz and 8 MB of cache.",
                "cite_spans": [
                    {
                        "start": 623,
                        "end": 624,
                        "text": "8",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 502,
                        "end": 509,
                        "text": "Table 4",
                        "ref_id": "TABREF8"
                    }
                ],
                "eq_spans": [],
                "section": "Timing Experiments",
                "sec_num": "5.3"
            },
            {
                "text": "nificantly less space. SORTED is slower but of course more memory efficient, and COMPRESSED is the slowest but also the most compact representation. In HASH+SCROLL, we issued queries to the language model using the context encoding, which speeds up queries substantially. Finally, we note that our direct-mapped cache is very effective. The query speed of all models is boosted substantially. In particular, our COMPRESSED implementation with caching is nearly as fast as SRILM-H without caching, and even the already fast HASH implementation is 300% faster in raw query speed with caching enabled. 10 Because we implemented our LMs in Java, we issued queries to SRILM via Java Native Interface (JNI) calls, which introduces a performance overhead. When called natively, we found that SRILM was about 200 ns/query faster.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Timing Experiments",
                "sec_num": "5.3"
            },
            {
                "text": "We also measured the effect of LM performance on overall decoder performance. We modified Joshua to optionally use our LM implementations during decoding, and measured the time required to decode all 2051 sentences of the 2008 News test set. The results are shown in Table 5 . Without caching, SRILM-H and HASH were comparable in speed, while COMPRESSED introduces a performance penalty. With caching enabled, overall decoder speed is improved for both HASH and SRILM-H, while the COMPRESSED implementation is only about 50% slower than the others.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 267,
                        "end": 274,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Timing Experiments",
                "sec_num": "5.3"
            },
            {
                "text": "We have presented several language model implementations that are state-of-the-art in both size and speed. Our experiments have demonstrated improvements in query speed over SRILM and in compression rates over state-of-the-art lossy compression. We have also described a simple caching technique which leads to performance increases in overall decoding time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "http://code.google.com/p/berkeleylm/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "While 32-bit architectures are still in use today, their limited address space is insufficient for modern language models, so we assume throughout that all machines use a 64-bit architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The implementation described in the paper represents each 32-bit integer compactly using only 16 bits, but this representation is quite inefficient, because determining the full 32-bit offset requires a binary search in a lookup table. 4 Typically, programming languages only provide support for arrays of bytes, not bits, but it is of course possible to simulate arrays with arbitrary numbers of bits using byte arrays and bit manipulation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We need this because n-grams refer to their contexts using array offsets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Guthrie and Hepple (2010) also report additional savings by quantizing values, though we could perform the same quantization in our storage scheme.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This work was supported by a Google Fellowship for the first author and by BBN under DARPA contract HR0011-06-C-0022. We would like to thank David Chiang, Zhifei Li, and the anonymous reviewers for their helpful comments. Unfortunately, it is not completely fair to compare our LMs against either of these numbers: although the JNI overhead slows down SRILM, implementing our LMs in Java instead of C++ slows down our LMs. In the tables, we quote times which include the JNI overhead, since this reflects the true cost to a decoder written in Java (e.g. Joshua).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Codes for the world wide web",
                "authors": [
                    {
                        "first": "Paolo",
                        "middle": [],
                        "last": "Boldi",
                        "suffix": ""
                    },
                    {
                        "first": "Sebastiano",
                        "middle": [],
                        "last": "Vigna",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Internet Mathematics",
                "volume": "2",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Paolo Boldi and Sebastiano Vigna. 2005. Codes for the world wide web. Internet Mathematics, 2.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Google web1t 5-gram corpus, version 1",
                "authors": [
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [],
                        "last": "Franz",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Linguistic Data Consortium, Philadelphia, Catalog Number LDC2006T13",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thorsten Brants and Alex Franz. 2006. Google web1t 5-gram corpus, version 1. In Linguistic Data Consortium, Philadelphia, Catalog Number LDC2006T13.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Large language models in machine translation",
                "authors": [
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    },
                    {
                        "first": "Ashok",
                        "middle": [
                            "C"
                        ],
                        "last": "Popat",
                        "suffix": ""
                    },
                    {
                        "first": "Peng",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Franz",
                        "middle": [
                            "J"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Dean",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "The Bloomier filter: an efficient data structure for static support lookup tables",
                "authors": [
                    {
                        "first": "Bernard",
                        "middle": [],
                        "last": "Chazelle",
                        "suffix": ""
                    },
                    {
                        "first": "Joe",
                        "middle": [],
                        "last": "Kilian",
                        "suffix": ""
                    },
                    {
                        "first": "Ronitt",
                        "middle": [],
                        "last": "Rubinfeld",
                        "suffix": ""
                    },
                    {
                        "first": "Ayellet",
                        "middle": [],
                        "last": "Tal",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bernard Chazelle, Joe Kilian, Ronitt Rubinfeld, and Ayellet Tal. 2004. The Bloomier filter: an efficient data structure for static support lookup tables. In Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A hierarchical phrase-based model for statistical machine translation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Chiang",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "The Annual Conference of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In The Annual Conference of the Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Compressing trigram language models with Golomb coding",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "Ted",
                        "middle": [],
                        "last": "Hart",
                        "suffix": ""
                    },
                    {
                        "first": "Jianfeng",
                        "middle": [],
                        "last": "Gao",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kenneth Church, Ted Hart, and Jianfeng Gao. 2007. Compressing trigram language models with Golomb coding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Efficient handling of n-gram language models for statistical machine translation",
                "authors": [
                    {
                        "first": "Marcello",
                        "middle": [],
                        "last": "Federico",
                        "suffix": ""
                    },
                    {
                        "first": "Mauro",
                        "middle": [],
                        "last": "Cettolo",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marcello Federico and Mauro Cettolo. 2007. Efficient handling of n-gram language models for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Trie memory",
                "authors": [
                    {
                        "first": "Edward",
                        "middle": [],
                        "last": "Fredkin",
                        "suffix": ""
                    }
                ],
                "year": 1960,
                "venue": "Communications of the ACM",
                "volume": "3",
                "issue": "",
                "pages": "490--499",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Edward Fredkin. 1960. Trie memory. Communications of the ACM, 3:490-499, September.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Tightly packed tries: how to fit large models into memory, and make them load fast, too",
                "authors": [
                    {
                        "first": "Ulrich",
                        "middle": [],
                        "last": "Germann",
                        "suffix": ""
                    },
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Joanis",
                        "suffix": ""
                    },
                    {
                        "first": "Samuel",
                        "middle": [],
                        "last": "Larkin",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ulrich Germann, Eric Joanis, and Samuel Larkin. 2009. Tightly packed tries: how to fit large models into memory, and make them load fast, too. In Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Run-length encodings",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "W"
                        ],
                        "last": "Golomb",
                        "suffix": ""
                    }
                ],
                "year": 1966,
                "venue": "IEEE Transactions on Information Theory",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. W. Golomb. 1966. Run-length encodings. IEEE Transactions on Information Theory, 12.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Storing the web in memory: space efficient language models with constant time retrieval",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Guthrie",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Hepple",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Guthrie and Mark Hepple. 2010. Storing the web in memory: space efficient language models with constant time retrieval. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Back-off language model compression",
                "authors": [
                    {
                        "first": "Boulos",
                        "middle": [],
                        "last": "Harb",
                        "suffix": ""
                    },
                    {
                        "first": "Ciprian",
                        "middle": [],
                        "last": "Chelba",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Dean",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjay",
                        "middle": [],
                        "last": "Ghemawat",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of Interspeech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Boulos Harb, Ciprian Chelba, Jeffrey Dean, and Sanjay Ghemawat. 2009. Back-off language model compression. In Proceedings of Interspeech.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Iterative language model estimation: Efficient data structure and algorithms",
                "authors": [
                    {
                        "first": "Bo-June",
                        "middle": [],
                        "last": "Hsu",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of Interspeech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bo-June Hsu and James Glass. 2008. Iterative language model estimation: Efficient data structure and algorithms. In Proceedings of Interspeech.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Stream-based randomised language models for SMT",
                "authors": [
                    {
                        "first": "Abby",
                        "middle": [],
                        "last": "Levenberg",
                        "suffix": ""
                    },
                    {
                        "first": "Miles",
                        "middle": [],
                        "last": "Osborne",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Abby Levenberg and Miles Osborne. 2009. Stream-based randomised language models for SMT. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "A scalable decoder for parsing-based machine translation with equivalent language model state maintenance",
                "authors": [
                    {
                        "first": "Zhifei",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjeev",
                        "middle": [],
                        "last": "Khudanpur",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Second Workshop on Syntax and Structure in Statistical Translation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhifei Li and Sanjeev Khudanpur. 2008. A scalable decoder for parsing-based machine translation with equivalent language model state maintenance. In Proceedings of the Second Workshop on Syntax and Structure in Statistical Translation.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Joshua: an open source toolkit for parsing-based machine translation",
                "authors": [
                    {
                        "first": "Zhifei",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Dyer",
                        "suffix": ""
                    },
                    {
                        "first": "Juri",
                        "middle": [],
                        "last": "Ganitkevitch",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjeev",
                        "middle": [],
                        "last": "Khudanpur",
                        "suffix": ""
                    },
                    {
                        "first": "Lane",
                        "middle": [],
                        "last": "Schwartz",
                        "suffix": ""
                    },
                    {
                        "first": "Wren",
                        "middle": [
                            "N",
                            "G"
                        ],
                        "last": "Thornton",
                        "suffix": ""
                    },
                    {
                        "first": "Jonathan",
                        "middle": [],
                        "last": "Weese",
                        "suffix": ""
                    },
                    {
                        "first": "Omar",
                        "middle": [
                            "F"
                        ],
                        "last": "Zaidan",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren N. G. Thornton, Jonathan Weese, and Omar F. Zaidan. 2009. Joshua: an open source toolkit for parsing-based machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "The alignment template approach to statistical machine translation",
                "authors": [
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Computational Linguistics",
                "volume": "30",
                "issue": "",
                "pages": "417--449",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417-449, December.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "SRILM: An extensible language modeling toolkit",
                "authors": [
                    {
                        "first": "Andreas",
                        "middle": [],
                        "last": "Stolcke",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of Interspeech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andreas Stolcke. 2002. SRILM: An extensible language modeling toolkit. In Proceedings of Interspeech.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Quantization-based language model compression",
                "authors": [
                    {
                        "first": "E",
                        "middle": [
                            "W D"
                        ],
                        "last": "Whittaker",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Raj",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of Eurospeech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. W. D. Whittaker and B. Raj. 2001. Quantization-based language model compression. In Proceedings of Eurospeech.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Compression using variable-length encoding. (a) A snippet of an (uncompressed) context-encoded array. (b) The context and word deltas. (c) The number of bits required to encode the context and word deltas as well as the value ranks. Word deltas use variable-length block coding with k = 1, while context deltas and value ranks use k = 2. (d) A snippet of the compressed encoding array. The header is outlined in bold."
            },
            "TABREF2": {
                "content": "<table/>",
                "text": "Sizes of the two language models used in our experiments.",
                "num": null,
                "html": null,
                "type_str": "table"
            },
            "TABREF4": {
                "content": "<table/>",
                "text": "Memory usages of several language model implementations on the WMT2010 language model. A * indicates that the storage in bytes per n-gram is reported for a different language model of comparable size, and the total size is thus a rough projection.",
                "num": null,
                "html": null,
                "type_str": "table"
            },
            "TABREF5": {
                "content": "<table><tr><td/><td colspan=\"4\">WEB1T</td></tr><tr><td>LM Type</td><td>bytes/key</td><td>bytes/value</td><td>bytes/n-gram</td><td>Total Size</td></tr><tr><td>Gzip</td><td>-</td><td>-</td><td>7.0</td><td>24.7G</td></tr><tr><td>T-MPHR\u2020</td><td>-</td><td>-</td><td>3.0</td><td>10.5G</td></tr><tr><td>COMPRESSED</td><td>1.3</td><td>1.6</td><td>2.9</td><td>10.2G</td></tr></table>",
                "text": "",
                "num": null,
                "html": null,
                "type_str": "table"
            },
            "TABREF6": {
                "content": "<table/>",
                "text": "Memory usages of several language model implementations on the WEB1T. A \u2020 indicates lossy compression.",
                "num": null,
                "html": null,
                "type_str": "table"
            },
            "TABREF8": {
                "content": "<table><tr><td>LM Type</td><td>No Cache</td><td>Cache</td><td>Size</td></tr><tr><td>COMPRESSED</td><td>9880\u00b182s</td><td>1547\u00b17s</td><td>3.7G</td></tr><tr><td>SRILM-H</td><td>1120\u00b126s</td><td>938\u00b111s</td><td>26.6G</td></tr><tr><td>HASH</td><td>1146\u00b18s</td><td>943\u00b116s</td><td>7.5G</td></tr></table>",
                "text": "Raw query speeds of various language model implementations. Times were averaged over 3 runs on the same machine. For HASH+SCROLL, all queries were issued to the decoder in context-encoded form, which speeds up queries that exhibit scrolling behaviour. Note that its memory usage is higher than for HASH because suffix offsets are stored along with the values for each n-gram.",
                "num": null,
                "html": null,
                "type_str": "table"
            }
        }
    }
}