{
    "paper_id": "D13-1019",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T16:40:32.399831Z"
    },
    "title": "Joint Learning of Phonetic Units and Word Pronunciations for ASR",
    "authors": [
        {
            "first": "Chia-Ying",
            "middle": [],
            "last": "Lee",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology Cambridge",
                "location": {
                    "postCode": "02139",
                    "region": "MA",
                    "country": "USA"
                }
            },
            "email": "chiaying@csail.mit.edu"
        },
        {
            "first": "Yu",
            "middle": [],
            "last": "Zhang",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology Cambridge",
                "location": {
                    "postCode": "02139",
                    "region": "MA",
                    "country": "USA"
                }
            },
            "email": "yzhang87@csail.mit.edu"
        },
        {
            "first": "James",
            "middle": [],
            "last": "Glass",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology Cambridge",
                "location": {
                    "postCode": "02139",
                    "region": "MA",
                    "country": "USA"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The creation of a pronunciation lexicon remains the most inefficient process in developing an Automatic Speech Recognizer (ASR). In this paper, we propose an unsupervised alternative-requiring no language-specific knowledge-to the conventional manual approach for creating pronunciation dictionaries. We present a hierarchical Bayesian model, which jointly discovers the phonetic inventory and the Letter-to-Sound (L2S) mapping rules in a language using only transcribed data. When tested on a corpus of spontaneous queries, the results demonstrate the superiority of the proposed joint learning scheme over its sequential counterpart, in which the latent phonetic inventory and L2S mappings are learned separately. Furthermore, the recognizers built with the automatically induced lexicon consistently outperform grapheme-based recognizers and even approach the performance of recognition systems trained using conventional supervised procedures.",
    "pdf_parse": {
        "paper_id": "D13-1019",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The creation of a pronunciation lexicon remains the most inefficient process in developing an Automatic Speech Recognizer (ASR). In this paper, we propose an unsupervised alternative-requiring no language-specific knowledge-to the conventional manual approach for creating pronunciation dictionaries. We present a hierarchical Bayesian model, which jointly discovers the phonetic inventory and the Letter-to-Sound (L2S) mapping rules in a language using only transcribed data. When tested on a corpus of spontaneous queries, the results demonstrate the superiority of the proposed joint learning scheme over its sequential counterpart, in which the latent phonetic inventory and L2S mappings are learned separately. Furthermore, the recognizers built with the automatically induced lexicon consistently outperform grapheme-based recognizers and even approach the performance of recognition systems trained using conventional supervised procedures.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Modern automatic speech recognizers require a few essential ingredients such as a signal representation of the speech signal, a search component, and typically a set of stochastic models that capture 1) the acoustic realizations of the basic sounds of a language, for example, phonemes, 2) the realization of words in terms of these sounds, and 3) how words are combined in spoken language. When creating a speech recognizer for a new language the usual requirements are: first, a large speech corpus with word-level annotations; second, a pronunciation dictionary that essentially defines a phonetic inventory for the language as well as word-level pronunciations, and third, optional additional text data that can be used to train the language model. Given these data and some decision about the signal representation, e.g., centi-second Mel-Frequency Cepstral Coefficients (MFCCs) (Davis and Mermelstein, 1980) with various derivatives, as well as the nature of the acoustic and language model such as 3-state HMMs and n-grams, iterative training methods can be used to effectively learn the model parameters for the acoustic and language models. Although the details of the components have changed through the years, this basic ASR formulation was well established by the late 1980's, and has not really changed much since then.",
                "cite_spans": [
                    {
                        "start": 884,
                        "end": 913,
                        "text": "(Davis and Mermelstein, 1980)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "One of the interesting aspects of this formulation is the inherent dependence on the dictionary, which defines both the phonetic inventory of a language, and the pronunciations of all the words in the vocabulary. The dictionary is arguably the cornerstone of a speech recognizer as it provides the essential transduction from sounds to words. Unfortunately, the dependency on this resource is a significant impediment to the creation of speech recognizers for new languages, since they are typically created by experts, whereas annotated corpora can be relatively more easily created by native speakers of a language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The existence of an expert-derived dictionary in the midst of stochastic speech recognition models is somewhat ironic, and it is natural to ask why it continues to receive special status after all these years. Why can we not learn the inventory of sounds of a language and associated word pronunciations automatically, much as we learn our acoustic model parameters? If successful, we would move one step forward towards breaking the language barrier that limits us from having speech recognizers for all languages of the world, instead of the less than 2% that currently exist.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we investigate the problem of inferring a pronunciation lexicon from an annotated corpus without exploiting any language-specific knowledge. We formulate our approach as a hierarchical Bayesian model, which jointly discovers the acoustic inventory and the latent encoding scheme between the letters and the sounds of a language. We evaluate the quality of the induced lexicon and acoustic model through a series of speech recognition experiments on a conversational weather query corpus (Zue et al., 2000) . The results demonstrate that our model consistently generates close performance to recognizers that are trained with expertdefined phonetic inventory and lexicon. Compared to grapheme-based recognizers, our model is capable of improving the Word Error Rates (WERs) by at least 15.3%. Finally, the joint learning framework proposed in this paper is proven to be much more effective than modeling the acoustic units and the letter-to-sound mappings separately, as shown in a 45% WER deduction our model achieves compared to a sequential approach.",
                "cite_spans": [
                    {
                        "start": 502,
                        "end": 520,
                        "text": "(Zue et al., 2000)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Various algorithms for learning sub-word based pronunciations were proposed in (Lee et al., 1988; Fukada et al., 1996; Bacchiani and Ostendorf, 1999; Paliwal, 1990) . In these previous approaches, spoken samples of a word are gathered, and usually only one single pronunciation for the word is derived based on the acoustic evidence observed in the spoken samples. The major difference between our work and these previous works is that our model learns word pronunciations in the context of letter sequences. More specifically, our model learns letter pronunciations first and then concatenates the pronunciation of each letter in a word to form the word pronunciation. The advantage of our approach is that pronunciation knowledge learned for a particular letter in some arbitrary word can subsequently be used to help learn the letter's pronunciation in other words. This property allows our model to potentially learn better pronunciations for less frequent words.",
                "cite_spans": [
                    {
                        "start": 79,
                        "end": 97,
                        "text": "(Lee et al., 1988;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 98,
                        "end": 118,
                        "text": "Fukada et al., 1996;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 119,
                        "end": 149,
                        "text": "Bacchiani and Ostendorf, 1999;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 150,
                        "end": 164,
                        "text": "Paliwal, 1990)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The more recent work by Garcia and Gish (2006) and Siu et al. (2013) has made extensive use of self-organizing units for keyword spotting and other tasks for languages with limited linguistic resources. Others who have more recently explored the unsupervised space include (Varadarajan et al., 2008; Jansen and Church, 2011; Lee and Glass, 2012) . The latter work introduced a nonparametric Bayesian inference procedure for automatically learning acoustic units that is most similar to our current work except that our model also infers word pronunciations simultaneously. The concept of creating a speech recognizer for a language with only orthographically annotated speech data has also been explored previously by means of graphemes. This approach has been shown to be effective for alphabetic languages with relatively straightforward grapheme to phoneme transformations and does not require any unsupervised learning of units or pronunciations (Killer et al., 2003; St\u00fcker and Schultz, 2004) . As we explain in later sections, grapheme-based systems can actually be regarded as a special case of our model; therefore, we expect our model to have greater flexibilities for capturing pronunciation rules of graphemes.",
                "cite_spans": [
                    {
                        "start": 24,
                        "end": 46,
                        "text": "Garcia and Gish (2006)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 51,
                        "end": 68,
                        "text": "Siu et al. (2013)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 273,
                        "end": 299,
                        "text": "(Varadarajan et al., 2008;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 300,
                        "end": 324,
                        "text": "Jansen and Church, 2011;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 325,
                        "end": 345,
                        "text": "Lee and Glass, 2012)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 950,
                        "end": 971,
                        "text": "(Killer et al., 2003;",
                        "ref_id": null
                    },
                    {
                        "start": 972,
                        "end": 997,
                        "text": "St\u00fcker and Schultz, 2004)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The goal of our model is to induce a word pronunciation lexicon from spoken utterances and their corresponding word transcriptions. No other languagespecific knowledge is assumed to be available, including the phonetic inventory of the language. To achieve the goal, our model needs to solve the following two tasks:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "\u2022 Discover the phonetic inventory.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "\u2022 Reveal the latent mapping between the letters and the discovered phonetic units.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "We propose a hierarchical Bayesian model for jointly discovering the two latent structures from an annotated speech corpus. Before presenting our model, we first describe the key latent and observed variables of the problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "Letter (l m i ) We use l m i to denote the i th letter observed in the word transcription of the m th training sample.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "To be sure, a training sample involves a speech utterance and its corresponding text transcription. The letter sequence composed of l m i and its context, namely",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "l m i\u2212\u03ba , \u2022 \u2022 \u2022 , l m i\u22121 , l m i , l m i+1 , \u2022 \u2022 \u2022 , l m i+\u03ba , is denoted as l m i,\u03ba . Although l m",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "i is referred to as a letter in this paper, it can represent any character observed in the text data, including space and symbols indicating sentence boundaries. The set of unique characters observed in the data set is denoted as G. For notation simplicity, we use L \u03ba to denote the set of letter sequences of length 2\u03ba + 1 that appear in the dataset and use l \u03ba to denote the elements in L \u03ba . Finally, P( l \u03ba ) is used to represent the parent of l \u03ba , which is a substring of l \u03ba with the first and the last characters truncated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "Number of Mapped Acoustic Units (n m i ) Each letter l m i in the transcriptions is assumed to be mapped to a certain number of phonetic units. For example, the letter x in the word fox is mapped to 2 phonetic units /k/ and /s/, while the letter e in the word lake is mapped to 0 phonetic units. We denote this number as n m i and limit its value to be 0, 1 or 2 in our model. The value of n m i is always unobserved and needs to be inferred by the our model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3"
            },
            {
                "text": "(c m i,p )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity of the Acoustic Unit",
                "sec_num": null
            },
            {
                "text": "For each phonetic unit that l m i maps to, we use c m i,p , for 1 \u2264 p \u2264 n m i , to denote the identity of the phonetic unit. Note that the phonetic inventory that describes the data set is unknown to our model, and the identities of the phonetic units are associated with the acoustic units discovered automatically by our model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity of the Acoustic Unit",
                "sec_num": null
            },
            {
                "text": "The observed speech data in our problem are converted to a series of 25 ms 13dimensional MFCCs (Davis and Mermelstein, 1980) and their first-and second-order time derivatives at a 10 ms analysis rate. We use x m t \u2208 R 39 to denote the t th feature frame of the m th utterance.",
                "cite_spans": [
                    {
                        "start": 95,
                        "end": 124,
                        "text": "(Davis and Mermelstein, 1980)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Speech Feature x m t",
                "sec_num": null
            },
            {
                "text": "We present the generative process for a single training sample (i.e., a speech utterance and its corresponding text transcription); to keep notation simple, we discard the index variable m in this section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "For each l i in the transcription, the model generates n i , given l i,\u03ba , from the 3-dimensional categorical distribution \u03c6 l i,\u03ba (n i ). Note that for every unique",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "l i,\u03ba letter sequence, there is an associated \u03c6 l i,\u03ba (n i ) l j 1\u2264 p \u2264 n i \u03b1 0 c i, p \u03b8 0 K \u03b8 c d i,p \u03b7 1 \u2264 i \u2264 L m n i x t 1 \u2264 m \u2264 M \u03c0 l 2,n,p \u03b3 \u03b2 \u03c0 l,n,p G \u00d7{(n,p) | 0 \u2264 n \u2264 2, 1 \u2264 p \u2264 n} \u03c0 l 1,n,p G \u00d7G G \u00d7G \u03b1 1 \u03b1 2 i-2 \u2264 j \u2264 i+2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "Figure 1: The graphical representation of the proposed hierarchical Bayesian model. The shaded circle denotes the observed text and speech data, and the squares denote the hyperparameters of the priors in our model. See Sec. 3 for a detailed explanation of the generative process of our model. distribution, which captures the fact that the number of phonetic units a letter maps to may depend on its context. In our model, we impose a Dirichlet distribution prior Dir(\u03b7) on \u03c6 l i,\u03ba (n i ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "If n i = 0, l i is not mapped to any acoustic units and the generative process stops for l i ; otherwise, for 1 \u2264 p \u2264 n i , the model generates c i,p from:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "c i,p \u223c \u03c0 l i,\u03ba ,n i ,p",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "where \u03c0 l i,\u03ba ,n i ,p is a K-dimensional categorical distribution, whose outcomes correspond to the phonetic units discovered by the model from the given speech data. Eq. 1 shows that for each combination of l i,\u03ba , n i and p, there is an unique categorical distribution. An important property of these categorical distributions is that they are coupled together such that their outcomes point to a consistent set of phonetic units. In order to enforce the coupling, we construct",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u03c0 l i,\u03ba ,n i ,p through a hierarchical process. \u03b2 \u223c Dir(\u03b3) (2) \u03c0 l i,\u03ba ,n i ,p \u223c Dir(\u03b1 \u03ba \u03b2) for \u03ba = 0 (3) \u03c0 l i,\u03ba ,n i ,p \u223c Dir(\u03b1 \u03ba \u03c0 l i,\u03ba\u22121 ,n i ,p ) for \u03ba \u2265 1",
                        "eq_num": "(4)"
                    }
                ],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "To interpret Eq. 2 to Eq. 4, we envision that the observed speech data are generated by a Kcomponent mixture model, of which the components correspond to the phonetic units in the language. As a result, \u03b2 in Eq. 2 can be viewed as the mixture weight over the components, which indicates how likely we are to observe each acoustic unit in the data overall. By adopting this point of view, we can also regard the mapping between l i and the phonetic units as a mixture model, and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "\u03c0 l i ,n i ,p represents how probable it is that l i is mapped to each phonetic unit given n i and p. We apply a Dirichlet distribution prior parametrized by \u03b1 0 \u03b2 to \u03c0 l i ,n i ,p as shown in Eq. 3. With this parameterization, the mean of \u03c0 l i ,n i ,p is the global mixture weight \u03b2, and \u03b1 0 controls how similar \u03c0 l i ,n i ,p is to the mean. More specifically, for large \u03b1 0 , the Dirichlet distribution is highly peaked around the mean; on the contrary, for small \u03b1 0 , the mean lies in a valley. The parameters of a Dirichlet distribution can also be viewed as pseudo-counts for each category. Eq. 4 shows that the prior for \u03c0 l i,\u03ba ,n i ,p is seeded by pseudo-counts that are proportional to the mapping weights over the phonetic units of l i in a shorter context. In other words, the mapping distribution of l i in a shorter context can be thought of as a back-off distribution for l i 's mapping weights in a longer context.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
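The hierarchical prior of Eq. 2 to Eq. 4 can be sketched numerically. The following is a minimal numpy illustration, not the paper's implementation: the symmetric top-level Dirichlet, the value of K, and the small jitter guarding against zero parameters are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 100                               # number of phonetic units (assumed here)
gamma, alpha0, alpha1 = 10.0, 100.0, 1.0

# Eq. 2: global mixture weights beta over the K units.
beta = rng.dirichlet(np.full(K, gamma / K))

# Eq. 3 (kappa = 0): the context-independent mapping weights are centred on
# beta; the larger alpha0 is, the more peaked the draw is around beta.
pi0 = rng.dirichlet(alpha0 * beta + 1e-12)

# Eq. 4 (kappa = 1): the longer-context distribution backs off to pi0,
# i.e. pi0 supplies the pseudo-counts for the prior of pi1.
pi1 = rng.dirichlet(alpha1 * pi0 + 1e-12)
```

Each draw stays on the K-simplex, so the outcomes of every mapping distribution refer to the same set of K phonetic units, which is exactly the coupling the hierarchy is meant to enforce.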
            {
                "text": "Each component of the K-dimensional mixture model is linked to a 3-state Hidden Markov Model (HMM). These K HMMs are used to model the phonetic units in the language (Jelinek, 1976) . The emission probability of each HMM state is modeled by a diagonal Gaussian Mixture Model (GMM). We use \u03b8 c to represent the set of parameters that define the c th HMM, which includes the state transition probability and the GMM parameters of each state emission distribution. The conjugate prior of \u03b8 c is denoted as H(\u03b8 0 ) 2 .",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 181,
                        "text": "(Jelinek, 1976)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "Finally, to finish the generative process, for each c i,p we use the corresponding HMM \u03b8 c i,p to generate the observed speech data x t , and the generative process of the HMM determines the duration, d i,p , of the speech segment. The complete generative model, with \u03ba set to 2, is depicted in Fig. 1 ; M is the total number of transcribed utterances in the corpus, and L m is the number of letters in utterance m. The shaded circles denote the observed data, and the squares denote the hyperparameters of the priors used in our model. Lastly, the unshaded circles denote the latent variables of our model, for which we derive inference algorithms in the next section.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 295,
                        "end": 301,
                        "text": "Fig. 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Generative Process",
                "sec_num": "3.1"
            },
            {
                "text": "We employ Gibbs sampling (Gelman et al., 2004) to approximate the posterior distribution of the latent variables in our model. In the following sections, we first present a message-passing algorithm for blocksampling n i and c i,p , and then describe how we leverage acoustic cues to accelerate the computation of the message-passing algorithm. Note that the block-sampling algorithm for n i and c i,p can be parallelized across utterances. Finally, we briefly discuss the inference procedures for \u03c6 l\u03ba , \u03c0 l\u03ba,n,p , \u03b2, \u03b8 c .",
                "cite_spans": [
                    {
                        "start": 25,
                        "end": 46,
                        "text": "(Gelman et al., 2004)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "To understand the message-passing algorithm in this study, it is helpful to think of our model as a simplified Hidden Semi-Markov Model (HSMM), in which the letters represent the states and the speech features are the observations. However, unlike in a regular HSMM, where the state sequence is hidden, in our case, the state sequence is fixed to be the given letter sequence. With this point of view, we can modify the message-passing algorithms of Murphy (2002) and Johnson and Willsky (2013) to compute the posterior information required for blocksampling n i and c i,p .",
                "cite_spans": [
                    {
                        "start": 450,
                        "end": 463,
                        "text": "Murphy (2002)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "Let L(x t ) be a function that returns the index of the letter from which x t is generated; also, let F t = 1 be a tag indicating that a new phone segment starts at t + 1. Given the constraint that 0 \u2264 n i \u2264 2, for 0 \u2264 i \u2264 L m and 0 \u2264 t \u2264 T m , the backwards messages B t (i) and B * t (i) for the m th training sample can be defined and computed as in Eq. 5 and Eq. 7. Note that for clarity we discard the index variable m in the derivation of the algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "B t (i) p(x t+1:T |L(x t ) = i, F t = 1) = min{L,i+1+U } j=i+1 B * t (j) j\u22121 k=i+1 p(n k = 0| l i,\u03ba ) = min{L,i+1+U } j=i+1 B * t (j) j\u22121 k=i+1 \u03c6 l i,\u03ba (0)",
                        "eq_num": "(5)"
                    }
                ],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "B * t (i) p(x t+1:T |L(x t+1 ) = i, F t = 1) = T \u2212t d=1 p(x t+1:t+d | l i,\u03ba )B t+d (i) (6) = T \u2212t d=1 { K c i,1 =1 \u03c6 l i,\u03ba (1)\u03c0 l i,\u03ba ,1,1 (c i,1 )p(x t+1:t+d |\u03b8 c i,1 ) + d\u22121 v=1 K c i,1 K c i,2 \u03c6 l i,\u03ba (2)\u03c0 l i,\u03ba ,2,1 (c i,1 )\u03c0 l i,\u03ba ,2,2 (c i,2 ) \u00d7 p(x t+1:t+v |\u03b8 c i,1 )p(x t+v+1:t+d |\u03b8 c i,2 )}B t+d (i)",
                        "eq_num": "(7)"
                    }
                ],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "We use x t 1 :t 2 to denote the segment consisting of",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "x t 1 , \u2022 \u2022 \u2022 , x t 2 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "Our inference algorithm only allows up to U letters to emit 0 acoustic units in a row. The value of U is set to 2 for our experiments. B t (i) represents the total probability of all possible alignments between x t+1:T and l i+1:L . B * t (i) contains the probability of all the alignments between x t+1:T and l i+1:L that map x t+1 to l i particularly. This alignment constraint between x t+1 and l i is explicitly shown in the first term of Eq. 6, which represents how likely the speech segment x t+1:t+d is generated by l i given l i 's context. This likelihood is simply the marginal probability of p(x t+1:t+d , n i , c i,p | l i,\u03ba ) with n i and c i,p integrated out, which can be expanded and computed as shown in the last three rows of Eq. 7. The index v specifies where the phone boundary is between the two acoustic units that l i is aligned with when n i = 2. Eq. 8 to Eq. 10 are the boundary conditions of the message passing algorithm. B 0 (0) carries the total probably of all possible alignments between l 1:L and x 1:T . Eq. 9 specifies that at most U letters at the end of an sentence can be left unaligned with any speech features, while Eq. 10 indicates that all of the speech features in an utterance must be assigned to a letter.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
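To make the recursion concrete, the following toy numpy sketch computes B and B* per Eqs. 5 to 10 on a tiny random problem. It is illustrative only: the sizes, the context-free phi and pi, and the independent-frame stand-in for the HMM segment scores are assumptions of the sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: T frames, L letters, K acoustic units, at most U skipped letters.
T, L, K, U = 6, 3, 2, 2

phi = rng.dirichlet(np.ones(3), size=L)         # phi_l(n) over n in {0,1,2}
pi = rng.dirichlet(np.ones(K), size=(L, 2, 2))  # pi[i, n-1, p-1] over units

# Stand-in per-frame likelihoods p(x_t | theta_c); rows 1..T used, row 0 pad.
frame_like = rng.uniform(0.1, 1.0, size=(T + 1, K))

def seg_like(t1, t2, c):
    """p(x_{t1:t2} | theta_c), simplified to an independent-frame product."""
    return float(np.prod(frame_like[t1:t2 + 1, c]))

B = np.zeros((T + 1, L + 1))    # B_t(i)
Bs = np.zeros((T + 1, L + 1))   # B*_t(i)

# Boundary conditions (Eqs. 9 and 10).
B[T, L] = 1.0
for i in range(max(0, L - U), L):
    B[T, i] = np.prod([phi[j - 1, 0] for j in range(i + 1, L + 1)])

for t in range(T - 1, -1, -1):
    for i in range(1, L + 1):
        # Eq. 7: letter i consumes one (n_i = 1) or two (n_i = 2) segments.
        total = 0.0
        for d in range(1, T - t + 1):
            n1 = phi[i - 1, 1] * sum(
                pi[i - 1, 0, 0, c] * seg_like(t + 1, t + d, c)
                for c in range(K))
            n2 = sum(
                phi[i - 1, 2] * pi[i - 1, 1, 0, c1] * pi[i - 1, 1, 1, c2]
                * seg_like(t + 1, t + v, c1) * seg_like(t + v + 1, t + d, c2)
                for v in range(1, d) for c1 in range(K) for c2 in range(K))
            total += (n1 + n2) * B[t + d, i]
        Bs[t, i] = total
    for i in range(0, L + 1):
        # Eq. 5: letters i+1 .. j-1 (at most U of them) emit nothing.
        B[t, i] = sum(
            Bs[t, j] * np.prod([phi[k - 1, 0] for k in range(i + 1, j)])
            for j in range(i + 1, min(L, i + 1 + U) + 1))
```

After the recursion, B[0, 0] holds the total probability of all alignments between the letter sequence and the feature frames, matching the role of B 0 (0) in Eq. 8.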
            {
                "text": "Algorithm 1 Block-sample n i and c i,p from B t (i) and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "B * t (i) 1: i \u2190 0 2: t \u2190 0 3: while i < L \u2227 t < T do 4: next i \u2190 SampleF romB t (i) 5: if next i > i + 1 then 6: for k = i + 1 to k = next i \u2212 1 do 7: n k \u2190 0 8: end for 9: end if 10: d, n i , c i,p , v \u2190 SampleF romB * t (next i )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "11:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "t \u2190 t + d 12: i \u2190 next i 13: end while B 0 (0) = min{L,U +1} j=1 B * 0 (j) j\u22121 k=1 \u03c6 l i,\u03ba (0) (8) B T (i) \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 if i = L L j=i+1 \u03c6 l i,\u03ba (0) if L \u2212 U \u2264 i < L 0 if i < L \u2212 U (9) B t (L) 1 if t = T 0 otherwise",
                        "eq_num": "(10)"
                    }
                ],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
            {
                "text": "Given B t (i) and B * t (i), n i and c i,p for each letter in the utterance can be sampled using Alg. 1. The SampleF romB t (i) function in line 4 returns a random sample from the relative probability distribution composed by entries of the summation in Eq. 5. Line 5 to line 9 check whether l i (and maybe l i+1 ) is mapped to zero phonetic units. next i points to the letter that needs to be aligned with 1 or 2 phone segments starting from x t . The number of phonetic units that l next i maps to and the identities of the units are sampled in SampleF romB * t (i). This subroutine generates a tuple of d, n i , c i,p as well as v (if n i = 2) from all the entries of the summation shown in Eq. 7 3 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Block-sampling n i and c i,p",
                "sec_num": "4.1"
            },
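Both subroutines of Alg. 1 reduce to drawing an index in proportion to a list of unnormalised summands. A minimal sketch of that shared primitive (the function name and the toy weights are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_from(weights):
    """Draw an index with probability proportional to the given unnormalised
    entries, as SampleFromB_t(i) and SampleFromB*_t(i) do over the summands
    of Eq. 5 and Eq. 7."""
    w = np.asarray(weights, dtype=float)
    return int(rng.choice(len(w), p=w / w.sum()))

# E.g., choosing next_i among three candidate summands with relative
# weights 0.2, 0.5 and 0.3:
counts = np.bincount([sample_from([0.2, 0.5, 0.3]) for _ in range(5000)],
                     minlength=3)
```

Over many draws, the empirical frequencies follow the relative weights, so the middle candidate is selected most often.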
            {
                "text": "The variables d and v in Eq. 7 enumerate through every frame index in a sentence, treating each feature frame as a potential boundary between acoustic units. However, it is possible to exploit acoustic cues to avoid checking feature frames that are unlikely to be phonetic boundaries. We follow the presegmentation method described in Glass (2003) to skip roughly 80% of the feature frames and greatly speed up the computation of B * t (i). Another heuristic applied to our algorithm to reduce the search space for d and v is based on the observation that the average duration of phonetic units is usually no longer than 300 ms. Therefore, when computing B * t (i), we only consider speech segments that are shorter than 300 ms to avoid aligning letters to speech segments that are too long to be phonetic units.",
                "cite_spans": [
                    {
                        "start": 335,
                        "end": 347,
                        "text": "Glass (2003)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Heuristic Phone Boundary Elimination",
                "sec_num": "4.2"
            },
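The two pruning heuristics above can be sketched as a filter on the candidate durations d (and, identically, v) examined when computing B * t (i). The function name, frame constants, and the toy boundary mask below are illustrative assumptions; a real mask would come from the acoustic pre-segmentation.

```python
# 10 ms analysis rate (Sec. 3) and the 300 ms duration cap discussed above.
FRAME_MS = 10
MAX_SEG_MS = 300

def candidate_durations(t, T, boundary_mask):
    """Durations d to examine when computing B*_t(i): only frames that the
    acoustic pre-segmentation flags as plausible phone boundaries are kept,
    and segments longer than MAX_SEG_MS are discarded."""
    max_d = min(T - t, MAX_SEG_MS // FRAME_MS)
    return [d for d in range(1, max_d + 1) if boundary_mask[t + d]]

# Toy mask: only every 5th frame survives pre-segmentation (~80% skipped).
T = 100
mask = [u % 5 == 0 for u in range(T + 1)]
ds = candidate_durations(t=0, T=T, boundary_mask=mask)
```

With this filter, the inner loops of the message-passing recursion touch only a small fraction of the frame indices, which is where the reported speed-up comes from.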
            {
                "text": "\u03c6 l\u03ba , \u03c0 l\u03ba,n i ,p , \u03b2 and \u03b8 c",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "Sampling \u03c6 l\u03ba To compute the posterior distribution of \u03c6 l\u03ba , we count how many times l \u03ba is mapped to 0, 1 and 2 phonetic units from n m i . More specifically, we define N l\u03ba (j) for 0 \u2264 j \u2264 2 as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "N l\u03ba (j) = M m=1 Lm i=1 \u03b4(n m i , j)\u03b4( l m i,\u03ba , l \u03ba )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "where we use \u03b4(\u2022) to denote the discrete Kronecker delta. With N l\u03ba , we can simply sample a new value for \u03c6 l\u03ba from the following distribution:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "\u03c6 l\u03ba \u223c Dir(\u03b7 + N l\u03ba )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
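Because the Dirichlet prior is conjugate to the categorical likelihood, this update is a single posterior draw. A small numpy sketch (the toy n i assignments are hypothetical; eta follows the value listed in Table 1):

```python
import numpy as np

rng = np.random.default_rng(3)

eta = np.full(3, 0.1)   # Dir(eta) prior on phi

# Toy n_i values gathered for one letter context l_kappa across utterances.
n_values = [0, 1, 1, 2, 1, 0, 1]
N = np.bincount(n_values, minlength=3)   # N_{l_kappa}(j) for j in {0, 1, 2}

# Conjugacy: the posterior is again Dirichlet, seeded by the counts.
phi = rng.dirichlet(eta + N)
```

Here N tallies how often the letter context emitted 0, 1, or 2 units, and the sampled phi concentrates on the empirically frequent outcomes.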
            {
                "text": "Sampling \u03c0 l\u03ba,n,p and \u03b2 The posterior distributions of \u03c0 l\u03ba,n,p and \u03b2 are constructed recursively due to the hierarchical structure imposed on \u03c0 l\u03ba,n,p and \u03b2. We start with gathering counts for updating the \u03c0 variables at the lowest level, i.e., \u03c0 l 2 ,n,p given that \u03ba is set to 2 in our model implementation, and then sample pseudo-counts for the \u03c0 variables at higher hierarchies as well as \u03b2. With the pseudo-counts, a new \u03b2 can be generated, which allows \u03c0 l\u03ba,n,p to be re-sampled sequentially. More specifically, we define C l 2 ,n,p (k) to be the number of times that l 2 is mapped to n units and the unit in position p is the k th phonetic unit. This value can be counted from the current values of c m i,p as follows.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "C l 2 ,n,p (k) = M m=1 Lm i=1 \u03b4( l i,2 , l 2 )\u03b4(n m i , n)\u03b4(c m i,p , k)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "To derive the posterior distribution of \u03c0 l 1 ,n,p analytically, we need to sample pseudo-counts C l 1 ,n,p , which is defined as follows.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "C l 1 ,n,p (k) = l 2 \u2208U l 1 C l 2 ,n,p (k) i=1 I[\u03bd i < \u03b1 2 \u03c0 l 1 ,n,p (k) i + \u03b1 2 \u03c0 l 1 ,n,p (k) ]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "(11) We use U l 1 = { l 2 |P( l 2 ) = l 1 } to denote the set of l 2 whose parent is l 1 and \u03bd i to represent random variables sampled from a uniform distribution between 0 and 1. Eq. 11 can be applied recursively to compute C l 0 ,n,p (k) and C ,n,p (k), the pseudo-counts that are applied to the conjugate priors of \u03c0 l 0 ,n,p and \u03b2. With the pseudo-count variables computed, new values for \u03b2 and \u03c0 l\u03ba,n,p can be sampled sequentially as shown in Eq. 12 to Eq. 14.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
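The pseudo-count sampling of Eq. 11 can be sketched as a sequence of Bernoulli trials: each observation counted at a child context survives to the parent with probability alpha * pi(k) / (i + alpha * pi(k)). The function below is our illustrative reading of Eq. 11, with toy counts and a toy parent distribution; it is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(4)

def pseudo_counts(child_counts, alpha, parent):
    """Sample C_{l1,n,p}(k) from the children's counts (Eq. 11 sketch).

    child_counts: iterable of K-dim count vectors C_{l2,n,p} for l2 in U_{l1};
    parent: the parent's mapping distribution pi_{l1,n,p}; alpha: alpha_2.
    """
    K = len(parent)
    C = np.zeros(K, dtype=int)
    for counts in child_counts:
        for k in range(K):
            a = alpha * parent[k]
            # One indicator draw nu_i per observation of unit k.
            for i in range(1, int(counts[k]) + 1):
                if rng.uniform() < a / (i + a):
                    C[k] += 1
    return C

parent = np.array([0.5, 0.3, 0.2])
children = [np.array([4, 0, 2]), np.array([1, 3, 0])]
C = pseudo_counts(children, alpha=2.0, parent=parent)
```

The sampled pseudo-counts never exceed the children's totals, and applying the same rule recursively yields the counts used in Eq. 12 and Eq. 13.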
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u03b2 \u223c Dir(\u03b3 + C ,n,p ) (12) \u03c0 l\u03ba,n,p \u223c Dir(\u03b1 \u03ba \u03b2 + C l\u03ba,n,p ) for \u03ba = 0 (13) \u03c0 l\u03ba,n,p \u223c Dir(\u03b1 \u03ba \u03c0 l \u03ba\u22121 ,n,p + C l\u03ba,n,p ) for \u03ba \u2265 1",
                        "eq_num": "(14)"
                    }
                ],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "5 Experimental Setup",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "To test the effectiveness of our model for joint learning phonetic units and word pronunciations from an annotated speech corpus, we construct speech recognizers out of the training results of our model. The performance of the recognizers is evaluated and compared against three baselines: first, a graphemebased speech recognizer; second, a recognizer built by using an expert-crafted lexicon, which is referred to as an expert lexicon in the rest of the paper for simplicity; and third, a recognizer built by discovering the phonetic units and L2S pronunciation rules sequentially without using a lexicon. In this section, we provide a detailed description of the experimental setup. \u03b7 \u03b3 \u03b1 0 \u03b1 1 \u03b1 2 \u03b8 0 \u03ba K 0.1 3 10 100 1 0.1 0.2 * 2 100 Table 1 : The values of the hyperparameters of our model. We use a D to denote a D-dimensional vector with all entries being a. *We follow the procedure reported in (Lee and Glass, 2012) to set up the HMM prior \u03b8 0 .",
                "cite_spans": [
                    {
                        "start": 906,
                        "end": 927,
                        "text": "(Lee and Glass, 2012)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 741,
                        "end": 748,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Sampling",
                "sec_num": "4.3"
            },
            {
                "text": "All the speech recognition experiments reported in this paper are performed on a weather query dataset, which consists of narrow-band, conversational telephone speech (Zue et al., 2000) . We follow the experimental setup of McGraw et al. (2013) and split the corpus into a training set of 87,351 utterances, a dev set of 1,179 utterances and a test set of 3,497 utterances. A subset of 10,000 utterances is randomly selected from the training set. We use this subset of data for training our model to demonstrate that our model is able to discover the phonetic composition and the pronunciation rules of a language even from just a few hours of data.",
                "cite_spans": [
                    {
                        "start": 167,
                        "end": 185,
                        "text": "(Zue et al., 2000)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 224,
                        "end": 244,
                        "text": "McGraw et al. (2013)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": "5.1"
            },
            {
                "text": "The values of the hyperparameters of our model are listed in Table 1 . We run the inference procedure described in Sec. 4 for 10,000 times on the randomly selected 10,000 utterances. The samples of \u03c6 l\u03ba and \u03c0 l\u03ban,p from the last iteration are used to decode n m i and c m i,p for each sentence in the entire training set by following the block-sampling algorithm described in Sec. 4.1. Since c m i,p is the phonetic mapping of l m i , by concatenating the phonetic mapping of every letter in a word, we can obtain a pronunciation of the word represented in the labels of discovered phonetic units. For example, assume that word w appears in sentence m and consists of l 3 l 4 l 5 (the sentence index m is ignored for simplicity). Also, assume that after decoding, n 3 = 1, n 4 = 2 and n 5 = 1. A pronunciation of w is then encoded by the sequence of phonetic labels c 3,1 c 4,1 c 4,2 c 5,1 . By repeating this process for each word in every sentence for the training set, a list of word pronunciations can be compiled and used as a stochastic lexicon to build a speech recognizer. In theory, the HMMs inferred by our model can be directly used as the acoustic model of a monophone speech recognizer. However, if we regard the c i,p labels of each utterance as the phone transcription of the sentence, then a new acoustic model can be easily re-trained on the entire data set. More conveniently, the phone boundaries corresponding to the c i,p labels are the by-products of the block-sampling algorithm, which are indicated by the values of d and v in line 10 of Alg. 1 and can be easily saved during the sampling procedure. Since these data are readily available, we re-build a context-independent model on the entire data set. In this new acoustic model, a 3-state HMM is used to model each phonetic unit, and the emission probability of each state is modeled by a 32-mixture GMM.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 61,
                        "end": 68,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Building a Recognizer from Our Model",
                "sec_num": "5.2"
            },
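The pronunciation-assembly step in the example above (w = l 3 l 4 l 5 with n 3 = 1, n 4 = 2, n 5 = 1) amounts to keeping the first n i unit labels of each letter and concatenating them. A small sketch, with placeholder label strings of our own:

```python
def word_pronunciation(n, c):
    """Concatenate the per-letter unit labels c_{i,p} into a pronunciation.

    n[i] is the number of phonetic units letter i maps to; c[i] is the list
    of its decoded unit labels.
    """
    units = []
    for n_i, c_i in zip(n, c):
        units.extend(c_i[:n_i])   # letters with n_i = 0 contribute nothing
    return units

# The example from the text: n_3 = 1, n_4 = 2, n_5 = 1.
pron = word_pronunciation([1, 2, 1],
                          [["c3,1"], ["c4,1", "c4,2"], ["c5,1"]])
# pron == ["c3,1", "c4,1", "c4,2", "c5,1"]
```

Running this over every word occurrence in the training set yields the pronunciation list that is collected into the stochastic lexicon.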
            {
                "text": "Finally, a trigram language model is built by using the word transcriptions in the full training set. This language model is utilized in all speech recognition experiments reported in this paper. Finite State Transducers (FSTs) are used to build all the recognizers used in this study. With the language model, the lexicon and the context-independent acoustic model constructed by the methods described in this section, we can build a speech recognizer from the learning output of the proposed model without the need of a pre-defined phone inventory and any expert-crafted lexicons.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Building a Recognizer from Our Model",
                "sec_num": "5.2"
            },
            {
                "text": "McGraw et al. (2013) presented the Pronunciation Mixture Model (PMM) for composing stochastic lexicons that outperform pronunciation dictionaries created by experts. Although the PMM framework was designed to incorporate and augment expert lexicons, we found that it can be adapted to polish the pronunciation list generated by our model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pronunciation Mixture Model Retraining",
                "sec_num": "5.2.1"
            },
            {
                "text": "In particular, the training procedure for PMMs includes three steps. First, train a L2S model from a manually specified expert-pronunciation lexicon; second, generate a list of pronunciations for each word in the dataset using the L2S model; and finally, use an acoustic model to re-weight the pronunciations based on the acoustic scores of the spoken examples of each word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pronunciation Mixture Model Retraining",
                "sec_num": "5.2.1"
            },
            {
                "text": "To adapt this procedure for our purposes, we simply plug in the word pronunciations and the acoustic model generated by our model. Once we obtain the re-weighted lexicon, we re-generate forced phone alignments and retrain the acoustic model, which can be utilized to repeat the PMM lexicon reweighting procedure. For our experiments, we iterate through this model refining process until the recognition performance converges.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pronunciation Mixture Model Retraining",
                "sec_num": "5.2.1"
            },
            {
                "text": "Conventionally, to train a context-dependent acoustic model, a list of questions based on the linguistic properties of phonetic units is required for growing decision tree classifiers (Young et al., 1994) . However, such language-specific knowledge is not available for our training framework; therefore, our strategy is to compile a question list that treats each phonetic unit as a unique linguistic class. In other words, our approach to training a contextdependent acoustic model for the automatically discovered units is to let the decision trees grow fully based on acoustic evidence.",
                "cite_spans": [
                    {
                        "start": 184,
                        "end": 204,
                        "text": "(Young et al., 1994)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triphone Model",
                "sec_num": "5.2.2"
            },
            {
                "text": "We compare the recognizers trained by following the procedures described in Sec. 5.2 against three baselines. The first baseline is a grapheme-based speech recognizer. We follow the procedure described in Killer et al. (2003) and train a 3-state HMM for each grapheme, which we refer to as the monophone grapheme model. Furthermore, we create a singleton question set (Killer et al., 2003) , in which each grapheme is listed as a question, to train a triphone grapheme model. Note that to enforce better initial alignments between the graphemes and the speech data, we use a pre-trained acoustic model to identify the non-speech segments at the beginning and the end of each utterance before starting training the monophone grapheme model.",
                "cite_spans": [
                    {
                        "start": 205,
                        "end": 225,
                        "text": "Killer et al. (2003)",
                        "ref_id": null
                    },
                    {
                        "start": 368,
                        "end": 389,
                        "text": "(Killer et al., 2003)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baselines",
                "sec_num": "5.3"
            },
            {
                "text": "Our model jointly discovers the phonetic inventory and the L2S mapping rules from a set of transcribed data. An alternative of our approach is to learn the two latent structures sequentially. We follow the training procedure of Lee and Glass (2012) to learn a set of acoustic models from the speech data and use these acoustic models to generate a phone transcription for each utterance. The phone transcriptions along with the corresponding word transcriptions are fed as inputs to the L2S model proposed in Bisani and Ney (2008) . A stochastic lexicon can be learned by applying the L2S model unit(%) Monophone Our model 17.0 Oracle 13.8 Grapheme 32.7 Sequential model 31.4 Table 2 : Word error rates generated by the four monophone recognizers described in Sec. 5.2 and Sec. 5.3 on the weather query corpus. and the discovered acoustic models to PMM. This two-stage approach for training a speech recognizer without an expert lexicon is referred to as the sequential model in this paper. Finally, we compare our system against a recognizer trained from an oracle recognition system. We build the oracle recognizer on the same weather query corpus by following the procedure presented in McGraw et al. (2013) . This oracle recognizer is then applied to generate forced-aligned phone transcriptions for the training utterances, from which we can build both monophone and triphone acoustic models. The expert-crafted lexicon used in the oracle recognizer is also used in this baseline. Note that for training the triphone model, we compose a singleton question list (Killer et al., 2003) that has every expert-defined phonetic unit as a question. We use this singleton question list instead of a more sophisticated one to ensure that this baseline and our system differ only in the acoustic model and the lexicon used to generate the initial phone transcriptions. We call this baseline the oracle baseline.",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 248,
                        "text": "Lee and Glass (2012)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 509,
                        "end": 530,
                        "text": "Bisani and Ney (2008)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 1190,
                        "end": 1210,
                        "text": "McGraw et al. (2013)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 1566,
                        "end": 1587,
                        "text": "(Killer et al., 2003)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 676,
                        "end": 683,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Baselines",
                "sec_num": "5.3"
            },
            {
                "text": "6 Results and Analysis 6.1 Monophone Systems Table 2 shows the WERs produced by the four monophone recognizers described in Sec. 5.2 and Sec. 5.3. It can be seen that our model outperforms the grapheme and the sequential model baselines significantly while approaching the performance of the supervised oracle baseline. The improvement over the sequential baseline demonstrates the strength of the proposed joint learning framework. More specifically, unlike the sequential baseline, in which the acoustic units are discovered independently from the text data, our model is able to exploit the L2S mapping constraints provided by the word transcriptions to cluster speech segments. By comparing our model to the grapheme baseline, we can see the advantage of modeling the pronunciations of a letter using a mixture model, especially for a language like English which has many pronunciation irregularities. However, even for languages with straightforward pronunciation rules, the concept of modeling letter pronunciations using mixture models still applies. The main difference is that the mixture weights for letters of languages with simple pronunciation rules will be sparser and spikier. In other words, in theory, our model should always perform comparable to, if not better than, grapheme recognizers.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 45,
                        "end": 52,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Baselines",
                "sec_num": "5.3"
            },
            {
                "text": "Last but not least, the recognizer trained with the automatically induced lexicon performs similarly to the recognizer initialized by an oracle recognition system, which demonstrates the effectiveness of the proposed model for discovering the phonetic inventory and a pronunciation lexicon from an annotated corpus. In the next section, we provide some insights into the quality of the learned lexicon and into what could have caused the performance gap between our model and the conventionally trained recognizer.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baselines",
                "sec_num": "5.3"
            },
            {
                "text": "The major difference between the recognizer that is trained by using our model and the recognizer that is seeded by an oracle recognition system is that the former uses an automatically discovered lexicon, while the latter exploits an expert-defined pronunciation dictionary. In order to quantify, as well as to gain insights into, the difference between these two lexicons, we define the average pronunciation entropy,\u0124, of a lexicon as follows.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pronunciation Entropy",
                "sec_num": "6.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "H \u2261 \u22121 |V | w\u2208V b\u2208B(w) p(b) log p(b)",
                        "eq_num": "(15)"
                    }
                ],
                "section": "Pronunciation Entropy",
                "sec_num": "6.2"
            },
            {
                "text": "where V denotes the vocabulary of a lexicon, B(w) represents the set of pronunciations of a word w and p(b) stands for the weight of a certain pronunciation b. Intuitively, we can regard\u0124 as an indicator of how much pronunciation variation that each word in a lexicon has on average. Table 3 shows that the\u0124 values of the lexicon induced by our model and the expert-defined lexicon as well as their respective PMM-refined versions 4 . In Table 3 , we can see that the automatically-discovered lexicon and its PMM-reweighted versions have much higher\u0124 values than their expert-defined counterparts. These higher\u0124 values imply that the lexicon induced by our model contains more pronunciation variation than the expert-defined lexicon. Therefore, the lattices constructed during the decoding process for our recognizer tend to be larger than those constructed for the oracle baseline, which explains the performance gap between the two systems in Table 2 and Table 3 . As shown in Table 3 , even though the lexicon induced by our model is noisier than the expertdefined dictionary, the PMM retraining framework consistently refines the induced lexicon and improves the performance of the recognizers 5 . To the best of our knowledge, we are the first to apply PMM to lexicons that are created by a fully unsu-pronunciations pronunciation probabilities pervised method. Therefore, in this paper, we provide further analysis on how PMM helps enhance the performance of our model. We compare the pronunciation lists for the word Burma generated by our model and refined iteratively by PMM in Table 4 . The first column of Table 4 shows all the pronunciations of Burma discovered by our model, to which our model assigns equal probabilities to create a stochastic list 6 . 
As demonstrated in the third and the fourth columns of Table 4 , the PMM framework is able to iteratively re-distribute the pronunciation weights and filter out less-likely pronunciations, which effectively reduces both the size and the entropy of the stochastic lexicon generated by our model. The benefits of using the PMM to refine the induced lexicon are twofold. First, the search space constructed during the recognition decoding process with the refined lexicon is more constrained, which is the main reason why the PMM is capable of improving the performance of the monophone recognizer that is trained with the output of our model. Secondly, and more importantly, the refined lexicon can greatly reduce the size of the FST built for the triphone recognizer of our model. These two observations illustrate why the PMM framework can be an useful tool for enhancing the lexicon discovered automatically by our model.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 284,
                        "end": 291,
                        "text": "Table 3",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 438,
                        "end": 445,
                        "text": "Table 3",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 945,
                        "end": 952,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 957,
                        "end": 964,
                        "text": "Table 3",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 979,
                        "end": 986,
                        "text": "Table 3",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 1587,
                        "end": 1594,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 1822,
                        "end": 1829,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Pronunciation Entropy",
                "sec_num": "6.2"
            },
            {
                "text": "The best monophone systems of the grapheme baseline, the oracle baseline and our model are used to 6 It is also possible to assign probabilities proportional to the decoding scores of the word tokens. Unit(%) Triphone Our model 13.4 Oracle 10.0 Grapheme 15.7 Table 5 : Word error rates of the triphone recognizers. The triphone recognizers are all built by using the phone transcriptions generated by their best monohpone system. For the oracle initialized baseline and for our model, the PMM-refined lexicons are used to build the triphone recognizers.",
                "cite_spans": [
                    {
                        "start": 99,
                        "end": 100,
                        "text": "6",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 259,
                        "end": 266,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Triphone Systems",
                "sec_num": "6.3"
            },
            {
                "text": "generate forced-aligned phone transcriptions, which are used to train the triphone models described in Sec. 5.2.2 and Sec. 5.3. Table 5 shows the WERs of the triphone recognition systems. Note that if a more conventional question list, for example, a list that contains rules to classify phones into different broad classes, is used to build the oracle triphone system, the WER can be reduced to 6.5%. However, as mentioned earlier, in order to gain insights into the quality of the induced lexicon and the discovered phonetic set, we compare our model against an oracle triphone system that is built by using a singleton question set. By comparing Table 2 and Table 5 , we can see that the grapheme triphone improves by a large margin compared to its monophone counterpart, which is consistent with the results reported in (Killer et al., 2003) . However, even though the grapheme baseline achieves a great performance gain with context-dependent acoustic models, the recognizer trained using the lexicon learned by our model and subsequently refined by PMM still outperforms the grapheme baseline. The consistently better performance our model achieves over the grapheme baseline demonstrates the strength of modeling the pronunciation of each letter with a mixture model that is presented in this paper.",
                "cite_spans": [
                    {
                        "start": 824,
                        "end": 845,
                        "text": "(Killer et al., 2003)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 128,
                        "end": 135,
                        "text": "Table 5",
                        "ref_id": null
                    },
                    {
                        "start": 649,
                        "end": 668,
                        "text": "Table 2 and Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Triphone Systems",
                "sec_num": "6.3"
            },
            {
                "text": "Last but not least, by comparing Table 2 and  Table 5 , it can be seen that the relative performance gain achieved by our model is similar to that obtained by the oracle baseline. Both Table 2 and Table 5 show that even without exploiting any language-specific knowledge during training, our recognizer is able to perform comparably with the recognizer trained using an expert lexicon. The ability of our model to obtain such similar performance further supports the effectiveness of the joint learning framework proposed in this paper for discovering the phonetic inventory and the word pronunciations from simply an annotated speech corpus.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 33,
                        "end": 53,
                        "text": "Table 2 and  Table 5",
                        "ref_id": null
                    },
                    {
                        "start": 185,
                        "end": 192,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 197,
                        "end": 204,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Triphone Systems",
                "sec_num": "6.3"
            },
            {
                "text": "We present a hierarchical Bayesian model for simultaneously discovering acoustic units and learning word pronunciations from transcribed spoken utterances. Both monophone and triphone recognizers can be built on the discovered acoustic units and the inferred lexicon. The recognizers trained with the proposed unsupervised method consistently outperforms grapheme-based recognizers and approach the performance of recognizers trained with expertdefined lexicons. In the future, we plan to apply this technology to develop ASRs for more languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "An abbreviation of \u03c0 l i,0 ,n i ,p 2 H(\u03b80) includes a Dirichlet prior for the transition probability of each state, and a Dirichlet prior for each mixture weight of the three GMMs, and a normal-Gamma distribution for the mean and precision of each Gaussian mixture in the 3-state HMM.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We use ci,p to denote that ci,p may consist of two numbers, ci,1 and ci,2, when ni = 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We build the PMM-refined version of the expert-defined lexicon by following the L2P-PMM framework described inMcGraw et al. (2013).5 The recognition results all converge in 2 \u223c 3 PMM retraining iterations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The authors would like to thank Ian McGraw and Ekapol Chuangsuwanich for their advice on the PMM and recognition experiments presented in this paper. Thanks to the anonymous reviewers for helpful comments. Finally, the authors would like to thank Stephen Shum for proofreading and editing the early drafts of this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Joint lexicon, acoustic unit inventory and model design",
                "authors": [
                    {
                        "first": "Michiel",
                        "middle": [],
                        "last": "Bacchiani",
                        "suffix": ""
                    },
                    {
                        "first": "Mari",
                        "middle": [],
                        "last": "Ostendorf",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Speech Communication",
                "volume": "29",
                "issue": "",
                "pages": "99--114",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michiel Bacchiani and Mari Ostendorf. 1999. Joint lexi- con, acoustic unit inventory and model design. Speech Communication, 29:99 -114.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Jointsequence models for grapheme-to-phoneme conversion",
                "authors": [
                    {
                        "first": "Maximilian",
                        "middle": [],
                        "last": "Bisani",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Speech Communication",
                "volume": "50",
                "issue": "5",
                "pages": "434--451",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme conver- sion. Speech Communication, 50(5):434-451, May.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences",
                "authors": [
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Steven",
                        "suffix": ""
                    },
                    {
                        "first": "Paul",
                        "middle": [],
                        "last": "Davis",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Mermelstein",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "IEEE Trans. on Acoustics, Speech, and Signal Processing",
                "volume": "28",
                "issue": "4",
                "pages": "357--366",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Steven B. Davis and Paul Mermelstein. 1980. Com- parison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. on Acoustics, Speech, and Signal Pro- cessing, 28(4):357-366.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Speech recognition based on acoustically derived segment units",
                "authors": [
                    {
                        "first": "Toshiaki",
                        "middle": [],
                        "last": "Fukada",
                        "suffix": ""
                    },
                    {
                        "first": "Michiel",
                        "middle": [],
                        "last": "Bacchiani",
                        "suffix": ""
                    },
                    {
                        "first": "Kuldip",
                        "middle": [],
                        "last": "Paliwal",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshinori",
                        "middle": [],
                        "last": "Sagisaka",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of ICSLP",
                "volume": "",
                "issue": "",
                "pages": "1077--1080",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Toshiaki Fukada, Michiel Bacchiani, Kuldip Paliwal, and Yoshinori Sagisaka. 1996. Speech recognition based on acoustically derived segment units. In Proceedings of ICSLP, pages 1077 -1080.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Keyword spotting of arbitrary words using minimal speech resources",
                "authors": [
                    {
                        "first": "Alvin",
                        "middle": [],
                        "last": "Garcia",
                        "suffix": ""
                    },
                    {
                        "first": "Herbert",
                        "middle": [],
                        "last": "Gish",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of ICASSP",
                "volume": "",
                "issue": "",
                "pages": "949--952",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alvin Garcia and Herbert Gish. 2006. Keyword spotting of arbitrary words using minimal speech resources. In Proceedings of ICASSP, pages 949-952.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Bayesian Data Analysis. Texts in Statistical Science. Chapman & Hall/CRC",
                "authors": [
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Gelman",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [
                            "B"
                        ],
                        "last": "Carlin",
                        "suffix": ""
                    },
                    {
                        "first": "Hal",
                        "middle": [
                            "S"
                        ],
                        "last": "Stern",
                        "suffix": ""
                    },
                    {
                        "first": "Donald",
                        "middle": [
                            "B"
                        ],
                        "last": "Rubin",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrew Gelman, John B. Carlin, Hal S. Stern, and Don- ald B. Rubin. 2004. Bayesian Data Analysis. Texts in Statistical Science. Chapman & Hall/CRC, second edition.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A probabilistic framework for segment-based speech recognition",
                "authors": [
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Computer Speech and Language",
                "volume": "17",
                "issue": "",
                "pages": "137--152",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "James Glass. 2003. A probabilistic framework for segment-based speech recognition. Computer Speech and Language, 17:137 -152.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Towards unsupervised training of speaker independent acoustic models",
                "authors": [
                    {
                        "first": "Aren",
                        "middle": [],
                        "last": "Jansen",
                        "suffix": ""
                    },
                    {
                        "first": "Kenneth",
                        "middle": [],
                        "last": "Church",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of INTERSPEECH",
                "volume": "",
                "issue": "",
                "pages": "1693--1696",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aren Jansen and Kenneth Church. 2011. Towards un- supervised training of speaker independent acoustic models. In Proceedings of INTERSPEECH, pages 1693 -1696.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Continuous speech recognition by statistical methods",
                "authors": [
                    {
                        "first": "Frederick",
                        "middle": [],
                        "last": "Jelinek",
                        "suffix": ""
                    }
                ],
                "year": 1976,
                "venue": "Proceedings of the IEEE",
                "volume": "64",
                "issue": "",
                "pages": "532--556",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Frederick Jelinek. 1976. Continuous speech recognition by statistical methods. Proceedings of the IEEE, 64:532-556.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Bayesian nonparametric hidden semi-Markov models",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [
                            "J"
                        ],
                        "last": "Johnson",
                        "suffix": ""
                    },
                    {
                        "first": "Alan",
                        "middle": [
                            "S"
                        ],
                        "last": "Willsky",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Journal of Machine Learning Research",
                "volume": "14",
                "issue": "",
                "pages": "673--701",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew J. Johnson and Alan S. Willsky. 2013. Bayesian nonparametric hidden semi-Markov models. Journal of Machine Learning Research, 14:673-701, February. Mirjam Killer, Sebastian St\u00fcker, and Tanja Schultz. 2003. Grapheme based speech recognition. In Proceeding of the Eurospeech, pages 3141-3144.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A nonparametric Bayesian approach to acoustic model discovery",
                "authors": [
                    {
                        "first": "Chia-Ying",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "40--49",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chia-ying Lee and James Glass. 2012. A nonparametric Bayesian approach to acoustic model discovery. In Proceedings of ACL, pages 40-49.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A segment model based approach to speech recognition",
                "authors": [
                    {
                        "first": "Chin-Hui",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Frank",
                        "middle": [],
                        "last": "Soong",
                        "suffix": ""
                    },
                    {
                        "first": "Biing-Hwang",
                        "middle": [],
                        "last": "Juang",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Proceedings of ICASSP",
                "volume": "",
                "issue": "",
                "pages": "501--504",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chin-Hui Lee, Frank Soong, and Biing-Hwang Juang. 1988. A segment model based approach to speech recognition. In Proceedings of ICASSP, pages 501-504.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Learning lexicons from speech using a pronunciation mixture model",
                "authors": [
                    {
                        "first": "Ian",
                        "middle": [],
                        "last": "Mcgraw",
                        "suffix": ""
                    },
                    {
                        "first": "Ibrahim",
                        "middle": [],
                        "last": "Badr",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "IEEE Trans. on Speech and Audio Processing",
                "volume": "21",
                "issue": "2",
                "pages": "357--366",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ian McGraw, Ibrahim Badr, and James Glass. 2013. Learning lexicons from speech using a pronunciation mixture model. IEEE Trans. on Speech and Audio Processing, 21(2):357-366.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Hidden semi-Markov models (HSMMs)",
                "authors": [
                    {
                        "first": "Kevin",
                        "middle": [
                            "P"
                        ],
                        "last": "Murphy",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kevin P. Murphy. 2002. Hidden semi-Markov models (HSMMs). Technical report, University of British Columbia.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Lexicon-building methods for an acoustic sub-word based speech recognizer",
                "authors": [
                    {
                        "first": "Kuldip",
                        "middle": [],
                        "last": "Paliwal",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Proceedings of ICASSP",
                "volume": "",
                "issue": "",
                "pages": "729--732",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kuldip Paliwal. 1990. Lexicon-building methods for an acoustic sub-word based speech recognizer. In Proceedings of ICASSP, pages 729-732.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Unsupervised training of an HMM-based self-organizing unit recognizer with applications to topic classification and keyword discovery",
                "authors": [
                    {
                        "first": "Man-Hung",
                        "middle": [],
                        "last": "Siu",
                        "suffix": ""
                    },
                    {
                        "first": "Herbert",
                        "middle": [],
                        "last": "Gish",
                        "suffix": ""
                    },
                    {
                        "first": "Arthur",
                        "middle": [],
                        "last": "Chan",
                        "suffix": ""
                    },
                    {
                        "first": "William",
                        "middle": [],
                        "last": "Belfield",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Lowe",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Computer, Speech, and Language",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Man-hung Siu, Herbert Gish, Arthur Chan, William Belfield, and Steve Lowe. 2013. Unsupervised training of an HMM-based self-organizing unit recognizer with applications to topic classification and keyword discovery. Computer, Speech, and Language.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A grapheme based speech recognition system for Russian",
                "authors": [
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "St\u00fcker",
                        "suffix": ""
                    },
                    {
                        "first": "Tanja",
                        "middle": [],
                        "last": "Schultz",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 9th Conference Speech and Computer",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sebastian St\u00fcker and Tanja Schultz. 2004. A grapheme based speech recognition system for Russian. In Proceedings of the 9th Conference Speech and Computer.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Unsupervised learning of acoustic sub-word units",
                "authors": [
                    {
                        "first": "Balakrishnan",
                        "middle": [],
                        "last": "Varadarajan",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjeev",
                        "middle": [],
                        "last": "Khudanpur",
                        "suffix": ""
                    },
                    {
                        "first": "Emmanuel",
                        "middle": [],
                        "last": "Dupoux",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of ACL-08: HLT, Short Papers",
                "volume": "",
                "issue": "",
                "pages": "165--168",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Balakrishnan Varadarajan, Sanjeev Khudanpur, and Emmanuel Dupoux. 2008. Unsupervised learning of acoustic sub-word units. In Proceedings of ACL-08: HLT, Short Papers, pages 165-168.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Tree-based state tying for high accuracy acoustic modelling",
                "authors": [
                    {
                        "first": "Steve",
                        "middle": [
                            "J"
                        ],
                        "last": "Young",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "J"
                        ],
                        "last": "Odell",
                        "suffix": ""
                    },
                    {
                        "first": "Philip",
                        "middle": [
                            "C"
                        ],
                        "last": "Woodland",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of HLT",
                "volume": "",
                "issue": "",
                "pages": "307--312",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Steve J. Young, J.J. Odell, and Philip C. Woodland. 1994. Tree-based state tying for high accuracy acoustic modelling. In Proceedings of HLT, pages 307-312.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Jupiter: A telephone-based conversational interface for weather information",
                "authors": [
                    {
                        "first": "Victor",
                        "middle": [],
                        "last": "Zue",
                        "suffix": ""
                    },
                    {
                        "first": "Stephanie",
                        "middle": [],
                        "last": "Seneff",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    },
                    {
                        "first": "Joseph",
                        "middle": [],
                        "last": "Polifroni",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Pao",
                        "suffix": ""
                    },
                    {
                        "first": "Timothy",
                        "middle": [
                            "J"
                        ],
                        "last": "Hazen",
                        "suffix": ""
                    },
                    {
                        "first": "Lee",
                        "middle": [],
                        "last": "Hetherington",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "IEEE Trans. on Speech and Audio Processing",
                "volume": "8",
                "issue": "",
                "pages": "85--96",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Victor Zue, Stephanie Seneff, James Glass, Joseph Polifroni, Christine Pao, Timothy J. Hazen, and Lee Hetherington. 2000. Jupiter: A telephone-based conversational interface for weather information. IEEE Trans. on Speech and Audio Processing, 8:85-96.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF1": {
                "text": "The upper-half of the table shows the average pronunciation entropies, \u0124, of the lexicons induced by our model and refined by PMM, as well as the WERs of the monophone recognizers built with the corresponding lexicons for the weather query corpus. The definition of \u0124 can be found in Sec. 6.2. The first row of the lower-half of the table lists the average pronunciation entropies, \u0124, of the expert-defined lexicon and the lexicons generated and weighted by the L2P-PMM framework described in McGraw et al. (2013). The second row of the lower-half of the table shows the WERs of the recognizers that are trained with the expert-defined lexicon and its PMM-refined versions.",
                "html": null,
                "content": "<table/>",
                "type_str": "table",
                "num": null
            },
            "TABREF3": {
                "text": "Pronunciation lists of the word Burma produced by our model and refined by PMM after 1 and 2 iterations.",
                "html": null,
                "content": "<table/>",
                "type_str": "table",
                "num": null
            }
        }
    }
}