{
    "paper_id": "2022",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T01:11:50.821135Z"
    },
    "title": "Using ASR-Generated Text for Spoken Language Modeling",
    "authors": [
        {
            "first": "Nicolas",
            "middle": [],
            "last": "Herv\u00e9",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Institut National de l'Audiovisuel (INA)",
                "location": {
                    "country": "France"
                }
            },
            "email": "nherve@ina.fr"
        },
        {
            "first": "Valentin",
            "middle": [],
            "last": "Pelloin",
            "suffix": "",
            "affiliation": {
                "laboratory": "Laboratoire d'Informatique de l'Universit\u00e9 du Mans (LIUM)",
                "institution": "",
                "location": {
                    "country": "France"
                }
            },
            "email": ""
        },
        {
            "first": "Benoit",
            "middle": [],
            "last": "Favre",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "CNRS",
                "location": {
                    "settlement": "Marseille",
                    "region": "LIS",
                    "country": "France"
                }
            },
            "email": ""
        },
        {
            "first": "Franck",
            "middle": [],
            "last": "Dary",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "CNRS",
                "location": {
                    "settlement": "Marseille",
                    "region": "LIS",
                    "country": "France"
                }
            },
            "email": ""
        },
        {
            "first": "Antoine",
            "middle": [],
            "last": "Laurent",
            "suffix": "",
            "affiliation": {
                "laboratory": "Laboratoire d'Informatique de l'Universit\u00e9 du Mans (LIUM)",
                "institution": "",
                "location": {
                    "country": "France"
                }
            },
            "email": ""
        },
        {
            "first": "Sylvain",
            "middle": [],
            "last": "Meignier",
            "suffix": "",
            "affiliation": {
                "laboratory": "Laboratoire d'Informatique de l'Universit\u00e9 du Mans (LIUM)",
                "institution": "",
                "location": {
                    "country": "France"
                }
            },
            "email": ""
        },
        {
            "first": "Laurent",
            "middle": [],
            "last": "Besacier",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Naver Labs Europe (NLE)",
                "location": {
                    "settlement": "Meylan",
                    "country": "France"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This papers aims at improving spoken language modeling (LM) using very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute 1) collection and obtain 19GB of text after applying ASR on 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT 2) or through training a LM from scratch. The new models (FlauBERT-Oral) are shared with the community 3 and are evaluated not only in terms of word prediction accuracy but also for two downstream tasks: classification of TV shows and syntactic parsing of speech. Experimental results show that FlauBERT-Oral is better than its initial FlauBERT version demonstrating that, despite its inherent noisy nature, ASR-Generated text can be useful to improve spoken language modeling.",
    "pdf_parse": {
        "paper_id": "2022",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This papers aims at improving spoken language modeling (LM) using very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute 1) collection and obtain 19GB of text after applying ASR on 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT 2) or through training a LM from scratch. The new models (FlauBERT-Oral) are shared with the community 3 and are evaluated not only in terms of word prediction accuracy but also for two downstream tasks: classification of TV shows and syntactic parsing of speech. Experimental results show that FlauBERT-Oral is better than its initial FlauBERT version demonstrating that, despite its inherent noisy nature, ASR-Generated text can be useful to improve spoken language modeling.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Large language models are trained with massive texts which do not reflect well the specific aspects of spoken language. Hence, modeling spoken language is challenging as crawling 'oralstyle' transcripts is a difficult task. To overcome this, our pilot study investigates the use of massive automatic speech recognition (ASR) generated text for spoken language modeling. We believe that this methodology could bring diversity (oral/spontaneous style, different topics) to the language modeling data. This might be also useful for languages with fewer text resources but potential high availability of speech recordings. We also see long-term benefits to using ASR generated text as speech recordings convey potentially useful metadata (ex: male/female speech) that could be leveraged for building LMs from more balanced data. Finally, as speech transcripts are naturally grounded with other modalities (if extracted from videos for instance), ASR could help building large scale multimodal language understanding corpora.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The contributions of this paper are the following:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 we build and share FlauBERT-Oral models from a massive amount (350,000 hours) of French TV shows,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 we evaluate them on word prediction (on both written and spoken corpora), automatic classification of TV shows and speech syntactic parsing,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 we demonstrate that ASR-Generated text can be useful for spoken LM.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We mention here related works to better position our approach: learning LMs from spoken transcripts, multimodal models and using LMs to rescore ASR. Learning LMs from spoken transcripts. probes BERT based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to encode properties of spoken language. Their empirical results show that LM is surprisingly good at capturing conversational properties such as pause prediction and overtalk detection from lexical tokens. But their LMs evaluated are mostly trained on clean (non ASR) spoken transcripts except one called ASRoBERTa which is trained on 2000h of transcribed speech only (1k Librispeech + 1k proprietary dataset). As a comparison with this study, we train our models on 175x more ASR data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "2"
            },
            {
                "text": "Multimodal models. While our approach uses ASR to build text-based spoken language models, Chuang et al. (2019) proposed an audio-andtext jointly learned SpeechBERT model for spoken question answering task. They show their model is able to extract information out of audio data that is complementary to (noisy) ASR output text. The architecture proposed by is different in the sense that it learns a joint language model with phoneme sequence and ASR transcript to learn phonetic-aware representations that are robust to ASR errors (not exactly a multimodal model). While speech or multimodal unsupervised representation learning is an interesting direction, this is out of the scope of this paper which focuses on language modeling from text transcripts only.",
                "cite_spans": [
                    {
                        "start": 91,
                        "end": 111,
                        "text": "Chuang et al. (2019)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "2"
            },
            {
                "text": "BERT for ASR re-ranking. We also mention here LMs to rescore ASR as this could be an interesting application of our proposed spoken language models. Chiu and Chen (2021) used BERT models for reranking of N-best hypotheses produced by automatic speech recognition (ASR). Their experiments on the AMI benchmark demonstrate the effectiveness of the approach in comparison to RNNbased re-ranking. A similar idea is introduced by Fohr and Illina (2021) where BERT features are added to the neural re-ranker used to rescore ASR hypotheses. Even more recently, Xu et al. (2022) showed how to train a BERT-based rescoring model to incorporate a discriminative loss into the finetuning step of deep bidirectional pretrained models for ASR.",
                "cite_spans": [
                    {
                        "start": 425,
                        "end": 447,
                        "text": "Fohr and Illina (2021)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 554,
                        "end": 570,
                        "text": "Xu et al. (2022)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "2"
            },
            {
                "text": "3 From FlauBERT to FlauBERT-Oral",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "2"
            },
            {
                "text": "The speech recognition system used to produce the text transcripts for this study was built using Kaldi (Povey et al., 2011 ). The acoustic model is based on the lattice-free MMI, so-called \"chain\" model (Povey et al., 2016) . We used a time-delay neural network (Peddinti et al., 2015 ) and a discriminative training on the top of it using the state-level minimum Bayes risk (sMBR) criterion (Vesel\u1ef3 et al., 2013) . For the acoustic model training, we used several TV and RADIO corpora (ESTER 1&2 (Galliano et al., 2009) , REPERE (Giraudel et al., 2012) and VERA (Goryainova et al., 2014) ). A regular backoff n-gram model was estimated using the speech transcripts augmented with several French newspapers (see section 4.2.3 in Del\u00e9glise et al. (2009) ) using SRILM.",
                "cite_spans": [
                    {
                        "start": 104,
                        "end": 123,
                        "text": "(Povey et al., 2011",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 204,
                        "end": 224,
                        "text": "(Povey et al., 2016)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 263,
                        "end": 285,
                        "text": "(Peddinti et al., 2015",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 393,
                        "end": 414,
                        "text": "(Vesel\u1ef3 et al., 2013)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 498,
                        "end": 521,
                        "text": "(Galliano et al., 2009)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 531,
                        "end": 554,
                        "text": "(Giraudel et al., 2012)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 564,
                        "end": 589,
                        "text": "(Goryainova et al., 2014)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 730,
                        "end": 753,
                        "text": "Del\u00e9glise et al. (2009)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ASR system",
                "sec_num": "3.1"
            },
            {
                "text": "A 2-gram decoding is performed, followed by a 3-gram and a 4-gram rescoring step. The LM interpolation weights between the different data sources were optimized on the REPERE (Giraudel et al., 2012) development corpus. The vocabulary contains the 160k most frequents words in the manually transcribed train corpus. Automatic speech diarization of the INA collection was performed using the open source toolkit LIUMSpkDiarization (Meignier and Merlin, 2010 The transcripts used in these experiments were taken from time slots corresponding to news programmes on French television and radio between 2013 and 2020. We transcribed the continuous news media between 6am and midnight each day (BFMTV, LCI, CNews, France 24, France Info and franceinfo). For radio, the morning news were used (Europe1, RMC, RTL, France Inter) and for generalist television channels we transcribed the evening news (TF1, France 2, France 3, M6). A total of 350,000 hours were automatically transcribed. The system we use provides us with raw text, without punctuation or capitalization. In order to have a pseudo sentence tokenization, we leverage the speaker diarization output to segment our transcriptions into \"sentences\". We end up with a total of 51M unique speech segments for a total of 3.5G words (19GB of data). The ASR generated text is strongly biased towards news content.",
                "cite_spans": [
                    {
                        "start": 175,
                        "end": 198,
                        "text": "(Giraudel et al., 2012)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 429,
                        "end": 455,
                        "text": "(Meignier and Merlin, 2010",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ASR system",
                "sec_num": "3.1"
            },
            {
                "text": "The initial French language model (FBU), trained in 2020 on natural text, is FlauBERT (Le et al., 2020) . Models of different sizes were trained using masked language modeling (MLM) following a RoBERTa architecture and using the CNRS Jean Zay supercomputer. They were shared on HuggingFace. 4 For comparison, these models were trained on 71GB of natural text. Following the architecture of Le et al. (2020) , we propose several learning configurations in order to observe the impact of different parameters on the performance of the models obtained. Since we only have lowercase transcripts, we consider the flaubert-base-uncased model as our reference. 5 The first configuration, FlauBERT-O-base_uncased (FT), consists in fine-tuning the public flaubert-base-uncased model for some epochs using our ASR transcripts.",
                "cite_spans": [
                    {
                        "start": 86,
                        "end": 103,
                        "text": "(Le et al., 2020)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 291,
                        "end": 292,
                        "text": "4",
                        "ref_id": null
                    },
                    {
                        "start": 390,
                        "end": 406,
                        "text": "Le et al. (2020)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 654,
                        "end": 655,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fine-tuning or re-training FlauBERT-Oral",
                "sec_num": "3.3"
            },
            {
                "text": "The second configuration FlauBERT-O-mixed (MIX) is a full model re-trained using a mix of ASR text and written text, as training data. Written text comes from two main sources: the French wikipedia dump and press articles captured by the OTMedia research platform (Herv\u00e9, 2019 ) (online press and AFP agency for the same time period). Overall, this learning dataset is also strongly newsoriented. For the written text, we use the same sentence segmentation tool as the one used for FlauBERT. Our dataset is balanced between ASR and written text: we use 94M randomly selected written text sentences representing 13G of data to which we removed the punctuation and capitalization to make it consistent with our ASR data. For this mixed model, we also retrain the BPE tokenizer (50K sub-word units).",
                "cite_spans": [
                    {
                        "start": 264,
                        "end": 276,
                        "text": "(Herv\u00e9, 2019",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fine-tuning or re-training FlauBERT-Oral",
                "sec_num": "3.3"
            },
            {
                "text": "The third configuration, FlauBERT-O-asr, consists in re-training LMs from scratch using ASR data only. For the first model (ORAL), we use the tokenizer provided with the flaubert-base-uncased model and for the second one (ORAL_NB) we retrain a BPE tokenizer (50K sub-word units). Both tokenizers share 35088 (overlap) out of 67536 (FlauBERT initial) tokens, only 52% overlap.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fine-tuning or re-training FlauBERT-Oral",
                "sec_num": "3.3"
            },
            {
                "text": "These different configurations therefore provide us with 4 language models to evaluate. Training was done on a single server with 2 Xeon CPUs of 12 cores each, 256 GB of RAM and 8 Nvidia GeForce RTX 2080 Ti graphics cards with 11 GB of memory. With this hardware, it took us 15 days to train 50 epochs of each model in the flaubert-base configuration (137M parameters) using FlauBERT code.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Fine-tuning or re-training FlauBERT-Oral",
                "sec_num": "3.3"
            },
            {
                "text": "The first step in evaluating our models is to look at their behaviour for the word prediction task. In addition to the performance on the trained models, we also want to have an idea of the performance on texts of different nature (written style or oral style). We therefore assembled several datasets to measure the word prediction performance of the models we trained.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Prediction Experiments",
                "sec_num": "4"
            },
            {
                "text": "We make sure that these datasets are not included in the training data of the default FlauBERT model nor in our own. We have a first corpus (afp2021) of AFP dispatches from the year 2021, i.e. after the period of our training data collected from the online press. This will allow us to have a measure of performance on written text. Secondly, we want to evaluate our models on oral texts. We use the transcripts of the French National Assembly sessions. 6 We are using the 13th (under Sarkozy parl_13) and 15th (currently under Macron parl_15) mandates. These texts are a manual transcription of what is said in the hemicycle, which are prepared speeches with some degree of spontaneous style as well. A second corpus is constituted with, once again, the manual transcriptions made for educational videos 7 and interviews 8 that INA makes available via its web studio (studio_manual). These transcriptions are of very good quality. We also transcribed these videos from the studio with our ASR system (studio_asr) in order to be able to compare the performance on both types of data.",
                "cite_spans": [
                    {
                        "start": 454,
                        "end": 455,
                        "text": "6",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Prediction Experiments",
                "sec_num": "4"
            },
            {
                "text": "We report in the graphs the accuracy obtained on the different datasets for a word prediction task after a word has been masked. The masking parameters are the same as those used during training with MLM loss. flaubert_base_uncased model (FBU). For the finetuned FlauBERT-O-base_uncased model, we notice a slight improvement in performance for afp and studio datasets, obtained from the first epoch, which means that adding ASR generated text improves word prediction task on these datasets. We observe that globally, whatever the model, the datasets of the parliamentary sessions are those for which the best performances are obtained on the word prediction task, even exceeding that of the training dataset for the FlauBERT-O-base_uncased and FlauBERT-O-mixed models. These models are trained on written and spoken texts and it is not surprising that the performance is good since the very nature of the parliamentary data is a mix-ture of prepared and spontaneous speech. There is no significant difference between parl_13 and parl_15. On these parlementary speeches, there is no significant performance difference between the 3 models that have seen written text during their training (FBU, FT and MIX). As we observed also that our FlauBERT-O models improve also on written text (afp2021), we explain this by the fact that those texts are strongly related to news events, so they are in a similar context to our ASR data which is focused on news slot transcripts. For the last corpus, from the INA web studio, we have educational videos or interviews of personalities which are more distant from news data. There is a great disparity in performance depending on whether we consider manual (studio_manual) or automatic (stu-dio_asr) transcription. We believe that the different sentence segmentation algorithms have a very clear impact on this corpus. Finally, we notice that the ORAL_NB model performs slightly worse than the ORAL model. 
The BPE tokenizer obviously has an impact on the overall performance of the LMs and it seems, from this result, that using BPE units extracted from clean data (and not noisy ASR data) is beneficial even if the training material is itself ASR generated text. We evaluate our different models on a news classification task. For the main generalist channels, INA's documentalists finely segment the newscasts and annotate them in order to describe their content. This very rich metadata is used in particular to establish quantitative studies on the news in France. The InaStat barometer 9 has set up a stable method-ology over time to classify these news items into 14 categories (such as society, French politics, sport or environment). We use the news items of 4 channels (TF1, France 2, France 3 and M6) for the years 2017, 2018 and 2019, which gives us a total of 47 867 short TV shows. The average length of these shows is 92 seconds.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Prediction Experiments",
                "sec_num": "4"
            },
            {
                "text": "The objective is to assess to what extent it is possible to classify these topics into the 14 categories solely on the basis of what is pronounced, i.e. from the ASR transcripts. We establish a baseline using a simple SVM classifier (with a non-parametric triangular kernel) on TF-IDF vectors with two vocabulary sizes of 5K and 20K words. To test the FlauBERT models, we use the HuggingFace Transformers library and the FlaubertForSequenceClassification class, which adds a simple dense classification layer on top of our models. To obtain a vector representation of our texts before this classification layer, we use the 'mean' summary type. We do not make any model selection and report the results for all learning epochs. Since the 14 categories are not well-balanced, we use the weighted F1 measure to evaluate the performance. The experiments are systematically performed on 10 different random splits of the dataset, taking into account the cardinality of the 14 categories, so as to have 38K examples for the training set and 5K for the test set. We show the average results and the standard deviation in figure 5. We can see in this configuration the contribution of the LMs compared to the SVMs along the training epochs of the classifier. If we look at the performance at the first epoch, we can see that the flaubert_base_uncased model has almost equivalent performance to the SVM (0.78). It is only after a few iterations of learning that the model fits the ASR data and reaches 0.81. On the other hand, the models that have already seen ASR data during ina-stat-sommaire.html their training have a better performance from the first epoch. The model trained only on ASR data is the best performing (ORAL). After 10 epochs, the 3 FlauBERT-Oral models converge and are equivalent for this task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Standard Learning Setting",
                "sec_num": "5.1"
            },
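            {
                "text": "With unbalanced categories, the weighted F1 averages per-class F1 scores with weights proportional to class support. A minimal pure-Python sketch of this metric (equivalent in spirit to scikit-learn's f1_score with average='weighted'):

```python
def weighted_f1(y_true, y_pred):
    # Per-class F1 scores, averaged with weights proportional to class support.
    n = len(y_true)
    score = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (sum(t == c for t in y_true) / n) * f1
    return score
```

Majority classes thus dominate the average, which is why a classifier that ignores rare categories can still reach a deceptively high weighted F1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Standard Learning Setting",
                "sec_num": "5.1"
            },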
            {
                "text": "In order to test the LMs under more challenging conditions, we progressively reduce the number of training examples to get closer to few-shot learning conditions. We thus restart the classification with 5K training examples, then 500 and finally 200. Again, we take into account the cardinality of the 14 categories. For the last experiment with only 200 training examples, the vocabulary is too small and we can only test the SVM baseline with a vocabulary of 5K words, but not the version with 20K words. Moreover, we push to 30 epochs in this latter case. As the number of training examples decreases, the performance gain over SVMs becomes more obvious. This is an expected result. In all cases, the models trained on ASR only text (ORAL) are the best of the FlauBERT-O models. Compared to the ORAL_NB model, only the tokenizer is different. This result may appear counter-intuitive in a first place, as one would expect a model entirely learned on ASR data to perform better on a classification task using only ASR data as input. However, this is probably counterbalanced by the fact that using BPE units extracted from clean data is important (as we have seen in the word prediction experiments). This invites us to further investigate the role of the tokenizer in spoken language modeling. As in the previous case, the Flaubert models converge almost with a 2 F1 point difference in favour of the FlauBERT-O models over the initial FlauBERT model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Few Shot Learning Setting",
                "sec_num": "5.2"
            },
            {
                "text": "This section is about the downstream task of jointly predicting part of speech tags (POS) and building a labelled dependency tree. The models performing these tasks typically rely on word representations, that are often pretrained, especially when the data is scarce. We will use our different spoken language models to obtain contextual word representations of a syntactically annotated and manually transcribed oral French corpus. For each of these representations, a model will be trained to perform the joint prediction of POS tags and labelled dependencies. We also use as baseline a model trained using non-contextual representations obtained with FastText, 10 and a model learning its own representations without any pretraining.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Downstream Task 2: Syntactic Analysis of Spoken Conversations",
                "sec_num": "6"
            },
            {
                "text": "We used the annotated subset of the speech corpus of the Orfeo project (Benzitoun et al., 2016; Nasr et al., 2020) , gathered with the goal of reflecting the contemporary usage of the French language.",
                "cite_spans": [
                    {
                        "start": 71,
                        "end": 95,
                        "text": "(Benzitoun et al., 2016;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 96,
                        "end": 114,
                        "text": "Nasr et al., 2020)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "6.1"
            },
            {
                "text": "The audio extracts on which this corpus is based come from various origins and modalities: from one to multiple speakers, work meetings, family dinner conversations, narration, political meeting, interview, goal-oriented telephone conversations. Their duration varies from four minutes to an hour.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "6.1"
            },
            {
                "text": "The reference audio transcripts have been obtained after correcting the output of an ASR system. The corpus is annotated in part of speech (POS) tags, lemmas, labeled dependency trees and sentences boundaries. There are 20 possible POS tags and 12 syntactic functions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "6.1"
            },
            {
                "text": "We randomly split the corpus into train/dev/test sets of respective sizes 134,716/27,937/29,529 words; we sampled from each source so that the various origins of the audios are equally represented in each split.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "6.1"
            },
            {
                "text": "The model is a transition based parser using the arc-eager transition system (Nivre, 2008) , which has been extended for the joint prediction of POS tags and parsing transitions (Dary and Nasr, 2021) .",
                "cite_spans": [
                    {
                        "start": 77,
                        "end": 90,
                        "text": "(Nivre, 2008)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 178,
                        "end": 199,
                        "text": "(Dary and Nasr, 2021)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "It consists of a single classifier, taking as input a numeric representation of the current state of the analysis, called a configuration. The classifier predicts a probability distribution over the set of POS tagging actions or parsing actions, depending of the current state of the configuration. The analysis assume that the text is already tokenized and segmented into sentences; the words of each sentence are considered one by one, in the reading order; a POS action is predicted for the current word, then a sequence of arc-eager actions is predicted until the current word is either attached to a word on its left or shifted to a stack for future attachment to a word on its right. The predictions are greedy: it is always the top scoring action among the allowed ones. We do not use beam search for decoding.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "The numeric representation of the current configuration is comprised of:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "\u2022 The concatenation of the word embeddings, reduced from dimension 768 to dimension 64 by a linear layer, of the following context: the current word, the three preceding ones, the two following ones, the three topmost stack elements and the rightmost and leftmost dependents of the three topmost stack elements,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "\u2022 The output of three different BiLSTM processing sequences of tags of the same nature. The first one is taking as input the sequence of POS tags and syntactic function of the current word, the three previous ones and the three topmost stack elements. The second one is taking the sequence of the last 10 actions that have been applied to this configuration. The last one is taking the sequence of distances (in number of words) between the current word and the three topmost stack elements. In each case, the sequence elements are encoded by learnable and randomly initialized embeddings of size 128, and the output of the BiLSTM is a vector of size 128,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "\u2022 A learnable and randomly initialized embedding encoding the current state of the configuration (POS tagging or dependency parsing).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "A dropout of 50% is applied to the resulting vector; then it passes through two hidden layers of respective sizes 3200 and 1600, both with a dropout of 40% and a ReLU activation. Finally, the network is ended by one of the two decision layers, depending on the current state, which is simply a linear layer of dimension the number of possible actions followed by a softmax.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "Each model was trained for 40 epochs; after every epoch the model was evaluated on the dev set and was saved if it was an improvement. After the fourth epoch, the entire train set was decoded using the model that was being trained, in order to generate and integrate novel configurations in the dataset for the epochs to come. This technique allows the model to be more robust, exploring nonoptimal configurations during its training. It is based on the dynamical oracle model of Goldberg and Nivre (2012) .",
                "cite_spans": [
                    {
                        "start": 480,
                        "end": 505,
                        "text": "Goldberg and Nivre (2012)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing Model",
                "sec_num": "6.2"
            },
            {
                "text": "The first set of experiments compares input representations from the FlauBERT variants (FBU, MIX, ORAL) to uncontextual word embeddings (Fasttext) and randomly initialized embeddings. Except for random embeddings, token representations are frozen when the parsing system is trained.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "6.3"
            },
            {
                "text": "As pre-processing, we deanonymize the transcripts by replacing masked proper name tokens with non-ambiguous names randomly chosen for each recording. In the fasttext setting, representations are computed for unknown words from their character n-gram factors. Contextual representations are computed at the whole recording level in chunks of 512 tokens without overlap. The parser is applied on the reference transcript and reference segmentation. We use mean pooling for words that are split in multiple tokens by BPE.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "6.3"
            },
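            {
                "text": "The mean pooling step above can be sketched as follows: each word representation is the average of the vectors of its BPE sub-tokens, given an alignment mapping each sub-token to its word index (a plain-Python sketch; in practice the vectors are the model's output embeddings):

```python
def mean_pool_words(subtoken_vecs, word_ids):
    # subtoken_vecs: one vector (list of floats) per BPE sub-token.
    # word_ids: word index of each sub-token (the BPE-to-word alignment).
    # Returns one averaged vector per word, in word order.
    groups = {}
    for vec, wid in zip(subtoken_vecs, word_ids):
        groups.setdefault(wid, []).append(vec)
    pooled = []
    for wid in sorted(groups):
        vecs = groups[wid]
        pooled.append([sum(dims) / len(vecs) for dims in zip(*vecs)])
    return pooled
```

A word left whole by BPE keeps its single sub-token vector unchanged; only split words are averaged.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "6.3"
            },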
            {
                "text": "Parsing performance is evaluated with Labeled Attachment Score (LAS), the accuracy of predicting the governor of each word and its dependency label, Unlabeled Attachment Score (UAS), which ignores the dependency label, and Part-of-speech tagging accuracy (UPOS). The scoring script is from CoNLL campaigns. Results presented in Table 3 show that pretraining is valuable for syntactic parsing in that setting and that pretraining on ASR (MIX and ORAL) leads to a substancial improvement in LAS over the text-only FlauBERT model (FBU) even though there is no domain overlap between the TV shows on which the earlier is trained and the data of the Orfeo corpus. There is no benefit from retraining BPE (ORAL_NB). As noted earlier, speech recordings do not have punctuation and it is debated whether punctuation is suitable for spontaenous conversations. As punctuation is rather regular in text, it would make sense for LMs trained on text to over-rely on the cues it brings, and representations to be affected by a lack of punctuation. Table 4 shows syntactic parsing results on representations where a simple heuristic is applied to add a period at the end of each sentence prior to extracting representations. This punctuation is stripped before passing the tokens to the syntactic parser and only used at the encoding stage. Results show that most of the difference in performance between the FBU and ORAL models can be compensated by this use of virtual punctuation. Using accurately predicted punctuation with diverse symbols and intra-sentence marks is Repr. 95.55 79.00 -16.55 Table 5 : Syntactic parsing performance on OOV words according to automatic transcription system. The \u2206 column contains the difference between the global accuracy and the accuracy on OOVs only.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 328,
                        "end": 335,
                        "text": "Table 3",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 1034,
                        "end": 1041,
                        "text": "Table 4",
                        "ref_id": "TABREF7"
                    },
                    {
                        "start": 1582,
                        "end": 1589,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "6.3"
            },
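            {
                "text": "The three scores can be sketched as follows: UAS checks the predicted governor, LAS additionally checks the dependency label, and UPOS checks the POS tag (a minimal illustrative re-implementation; the actual evaluation uses the CoNLL scoring script):

```python
def parsing_scores(gold, pred):
    # gold, pred: aligned lists of (head_index, dependency_label, pos_tag).
    n = len(gold)
    las = sum(g[0] == p[0] and g[1] == p[1] for g, p in zip(gold, pred)) / n
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    upos = sum(g[2] == p[2] for g, p in zip(gold, pred)) / n
    return las, uas, upos
```

By construction LAS can never exceed UAS, since a labeled attachment is only counted correct when the unlabeled one is.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "6.3"
            },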
            {
                "text": "left as future work, but we conjecture that it will marginally improve over this crude heuristic. Gauging the impact of speech-to-text errors on representations from LMs trained on such data is difficult since there are no manual references available for large quantities of speech transcripts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Repr",
                "sec_num": null
            },
            {
                "text": "Since the system used to transcribe the recordings is closed vocabulary, one way to look at this problem is to compute the accuracy of the syntactic parser on words that are out-of-vocabulary (OOV) for the LM training data. Due to BPE, those words are necesseraly tokenized in smaller units which are pooled prior to passing them to the parser, and might hamper the quality of the associated representations. Table 5 details the performance of the syntactic parser on OOVs. Due to their infrequent nature, OOVs are mainly swear words, proper names, and tokenization artifacts. They are difficult to handle for all models, and suffer from a large performance reduction compared to the global figure, even for the FBU model which has seen a much larger variety of texts. The system fed with representations of the model trained on ASR data only (ORAL) is the most affected despite its better global performance. Finally, Figure 9 shows the learning curve when reducing the training data available to the syntactic parser. For this, we randomly sampled 10 subsets of the training data at the recording level in order to fit a target ratio from 2.5% to 100%. The figure shows that LAS is always better for ORAL representations and that MIX is closer to FBU when less data is available.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 409,
                        "end": 416,
                        "text": "Table 5",
                        "ref_id": null
                    },
                    {
                        "start": 919,
                        "end": 927,
                        "text": "Figure 9",
                        "ref_id": "FIGREF7"
                    }
                ],
                "eq_spans": [],
                "section": "Repr",
                "sec_num": null
            },
            {
                "text": "It seems that exploiting ASR transcripts for learning LMs is beneficial for syntactic parsing of speech transcripts. Analyses presented show that punctuation plays an important role in representations. Our analysis of parsing performance on OOV words (according to the speech-to-text system) reveals that our FlauBERT-O-asr (ORAL) model is more affected than its initial FlauBERT baseline (FBU), despite overall better performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Takeaways",
                "sec_num": "6.4"
            },
            {
                "text": "We investigated spoken language modeling using ASR generated text (350,000 hours of diverse TV shows). The new models for French (FlauBERT-O) are shared with the community. Experimental results show that FlauBERT-O is generally better than its initial FlauBERT version for the downstream speech tasks we experimented with. However we should also check its performance on text downstream tasks (such as (Le et al., 2020) ) and on more downstream speech tasks (SLU or ASR re-scoring).",
                "cite_spans": [
                    {
                        "start": 402,
                        "end": 419,
                        "text": "(Le et al., 2020)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "7"
            },
            {
                "text": "In this work, all our texts were uncased as our ASR only generates lowercased transcripts. We believe that applying massively re-capitalisation (and restoring punctuation as well) might be beneficial to train stronger spoken LMs. We also plan to analyze more the specificities of our ASR-generated texts (do they contain more oral features such as word repetitions, more interjections?). Finally, some of the results obtained lead us to believe that it is important to further evaluate the impact of BPE units for spoken language modeling.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "7"
            },
            {
                "text": "https://www.ina.fr 2 https://github.com/getalp/Flaubert 3 https://huggingface.co/nherve",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://huggingface.co/flaubert",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://huggingface.co/flaubert/ flaubert_base_uncased",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://data.assemblee-nationale.fr/ 7 https://www.ina.fr/ offres-et-services/fresques-numeriques 8 https://entretiens.ina.fr/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://www.inatheque.fr/ publications-evenements/ina-stat/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://fasttext.cc",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Le projet orf\u00e9o: un corpus d'\u00e9tude pour le fran\u00e7ais contemporain",
                "authors": [
                    {
                        "first": "Christophe",
                        "middle": [],
                        "last": "Benzitoun",
                        "suffix": ""
                    },
                    {
                        "first": "Jeanne-Marie",
                        "middle": [],
                        "last": "Debaisieux",
                        "suffix": ""
                    },
                    {
                        "first": "Henri-Jos\u00e9",
                        "middle": [],
                        "last": "Deulofeu",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Corpus",
                "volume": "",
                "issue": "15",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christophe Benzitoun, Jeanne-Marie Debaisieux, and Henri-Jos\u00e9 Deulofeu. 2016. Le projet orf\u00e9o: un cor- pus d'\u00e9tude pour le fran\u00e7ais contemporain. Corpus, (15).",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Innovative bert-based reranking language models for speech recognition",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Shih-Hsuan",
                        "suffix": ""
                    },
                    {
                        "first": "Berlin",
                        "middle": [],
                        "last": "Chiu",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "IEEE Spoken Language Technology Workshop (SLT)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1109/slt48900.2021.9383557"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Shih-Hsuan Chiu and Berlin Chen. 2021. Innovative bert-based reranking language models for speech recognition. 2021 IEEE Spoken Language Technol- ogy Workshop (SLT).",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Speechbert: Cross-modal pre-trained language model for end-to-end spoken question answering",
                "authors": [
                    {
                        "first": "Yung-Sung",
                        "middle": [],
                        "last": "Chuang",
                        "suffix": ""
                    },
                    {
                        "first": "Chi-Liang",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Hung-Yi",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yung-Sung Chuang, Chi-Liang Liu, and Hung-yi Lee. 2019. Speechbert: Cross-modal pre-trained language model for end-to-end spoken question answering. CoRR, abs/1910.11559.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "The reading machine: A versatile framework for studying incremental parsing strategies",
                "authors": [
                    {
                        "first": "Franck",
                        "middle": [],
                        "last": "Dary",
                        "suffix": ""
                    },
                    {
                        "first": "Alexis",
                        "middle": [],
                        "last": "Nasr",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)",
                "volume": "",
                "issue": "",
                "pages": "26--37",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/2021.iwpt-1.3"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Franck Dary and Alexis Nasr. 2021. The reading ma- chine: A versatile framework for studying incremen- tal parsing strategies. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into En- hanced Universal Dependencies (IWPT 2021), pages 26-37, Online. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Improvements to the lium french asr system based on cmu sphinx: what helps to significantly reduce the word error rate?",
                "authors": [
                    {
                        "first": "Paul",
                        "middle": [],
                        "last": "Del\u00e9glise",
                        "suffix": ""
                    },
                    {
                        "first": "Yannick",
                        "middle": [],
                        "last": "Esteve",
                        "suffix": ""
                    },
                    {
                        "first": "Sylvain",
                        "middle": [],
                        "last": "Meignier",
                        "suffix": ""
                    },
                    {
                        "first": "Teva",
                        "middle": [],
                        "last": "Merlin",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Tenth Annual Conference of the International Speech Communication Association",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Paul Del\u00e9glise, Yannick Esteve, Sylvain Meignier, and Teva Merlin. 2009. Improvements to the lium french asr system based on cmu sphinx: what helps to signif- icantly reduce the word error rate? In Tenth Annual Conference of the International Speech Communica- tion Association.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "BERT-based Semantic Model for Rescoring N-best Speech Recognition List",
                "authors": [
                    {
                        "first": "Dominique",
                        "middle": [],
                        "last": "Fohr",
                        "suffix": ""
                    },
                    {
                        "first": "Irina",
                        "middle": [],
                        "last": "Illina",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "INTERSPEECH 2021, Proceedings of INTERSPEECH 2021",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dominique Fohr and Irina Illina. 2021. BERT-based Semantic Model for Rescoring N-best Speech Recog- nition List. In INTERSPEECH 2021, Proceedings of INTERSPEECH 2021, Brno, Czech Republic.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "The ester 2 evaluation campaign for the rich transcription of french radio broadcasts",
                "authors": [
                    {
                        "first": "Sylvain",
                        "middle": [],
                        "last": "Galliano",
                        "suffix": ""
                    },
                    {
                        "first": "Guillaume",
                        "middle": [],
                        "last": "Gravier",
                        "suffix": ""
                    },
                    {
                        "first": "Laura",
                        "middle": [],
                        "last": "Chaubard",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Tenth Annual Conference of the International Speech Communication Association",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sylvain Galliano, Guillaume Gravier, and Laura Chaubard. 2009. The ester 2 evaluation campaign for the rich transcription of french radio broadcasts. In Tenth Annual Conference of the International Speech Communication Association.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "The repere corpus: a multimodal corpus for person recognition",
                "authors": [
                    {
                        "first": "Aude",
                        "middle": [],
                        "last": "Giraudel",
                        "suffix": ""
                    },
                    {
                        "first": "Matthieu",
                        "middle": [],
                        "last": "Carr\u00e9",
                        "suffix": ""
                    },
                    {
                        "first": "Val\u00e9rie",
                        "middle": [],
                        "last": "Mapelli",
                        "suffix": ""
                    },
                    {
                        "first": "Juliette",
                        "middle": [],
                        "last": "Kahn",
                        "suffix": ""
                    },
                    {
                        "first": "Olivier",
                        "middle": [],
                        "last": "Galibert",
                        "suffix": ""
                    },
                    {
                        "first": "Ludovic",
                        "middle": [],
                        "last": "Quintard",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "LREC",
                "volume": "",
                "issue": "",
                "pages": "1102--1107",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aude Giraudel, Matthieu Carr\u00e9, Val\u00e9rie Mapelli, Juliette Kahn, Olivier Galibert, and Ludovic Quintard. 2012. The repere corpus: a multimodal corpus for person recognition. In LREC, pages 1102-1107.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "A dynamic oracle for arc-eager dependency parsing",
                "authors": [
                    {
                        "first": "Yoav",
                        "middle": [],
                        "last": "Goldberg",
                        "suffix": ""
                    },
                    {
                        "first": "Joakim",
                        "middle": [],
                        "last": "Nivre",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of COLING 2012",
                "volume": "",
                "issue": "",
                "pages": "959--976",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic ora- cle for arc-eager dependency parsing. In Proceedings of COLING 2012, pages 959-976.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Morpho-syntactic study of errors from speech recognition system",
                "authors": [
                    {
                        "first": "Maria",
                        "middle": [],
                        "last": "Goryainova",
                        "suffix": ""
                    },
                    {
                        "first": "Cyril",
                        "middle": [],
                        "last": "Grouin",
                        "suffix": ""
                    },
                    {
                        "first": "Sophie",
                        "middle": [],
                        "last": "Rosset",
                        "suffix": ""
                    },
                    {
                        "first": "Ioana",
                        "middle": [],
                        "last": "Vasilescu",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "LREC",
                "volume": "14",
                "issue": "",
                "pages": "3050--3056",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maria Goryainova, Cyril Grouin, Sophie Rosset, and Ioana Vasilescu. 2014. Morpho-syntactic study of errors from speech recognition system. In LREC, volume 14, pages 3050-3056.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "OTMedia, the TransMedia news observatory",
                "authors": [
                    {
                        "first": "Nicolas",
                        "middle": [],
                        "last": "Herv\u00e9",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "FIAT/IFTA Media Management Seminar",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nicolas Herv\u00e9. 2019. OTMedia, the TransMedia news observatory. In FIAT/IFTA Media Management Sem- inar 2019.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "What BERT based language model learns in spoken transcripts: An empirical study",
                "authors": [
                    {
                        "first": "Ayush",
                        "middle": [],
                        "last": "Kumar",
                        "suffix": ""
                    },
                    {
                        "first": "Jithendra",
                        "middle": [],
                        "last": "Mukuntha Narayanan Sundararaman",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Vepa",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
                "volume": "",
                "issue": "",
                "pages": "322--336",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/2021.blackboxnlp-1.25"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Ayush Kumar, Mukuntha Narayanan Sundararaman, and Jithendra Vepa. 2021. What BERT based lan- guage model learns in spoken transcripts: An empiri- cal study. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Net- works for NLP, pages 322-336, Punta Cana, Do- minican Republic. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "FlauBERT: Unsupervised language model pre-training for French",
                "authors": [
                    {
                        "first": "Hang",
                        "middle": [],
                        "last": "Le",
                        "suffix": ""
                    },
                    {
                        "first": "Lo\u00efc",
                        "middle": [],
                        "last": "Vial",
                        "suffix": ""
                    },
                    {
                        "first": "Jibril",
                        "middle": [],
                        "last": "Frej",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [],
                        "last": "Segonne",
                        "suffix": ""
                    },
                    {
                        "first": "Maximin",
                        "middle": [],
                        "last": "Coavoux",
                        "suffix": ""
                    },
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Lecouteux",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandre",
                        "middle": [],
                        "last": "Allauzen",
                        "suffix": ""
                    },
                    {
                        "first": "Benoit",
                        "middle": [],
                        "last": "Crabb\u00e9",
                        "suffix": ""
                    },
                    {
                        "first": "Laurent",
                        "middle": [],
                        "last": "Besacier",
                        "suffix": ""
                    },
                    {
                        "first": "Didier",
                        "middle": [],
                        "last": "Schwab",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
                "volume": "",
                "issue": "",
                "pages": "2479--2490",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Max- imin Coavoux, Benjamin Lecouteux, Alexandre Al- lauzen, Benoit Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 2479-2490, Marseille, France. European Language Resources Association.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Lium spkdiarization: an open source toolkit for diarization",
                "authors": [
                    {
                        "first": "Sylvain",
                        "middle": [],
                        "last": "Meignier",
                        "suffix": ""
                    },
                    {
                        "first": "Teva",
                        "middle": [],
                        "last": "Merlin",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "CMU SPUD Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sylvain Meignier and Teva Merlin. 2010. Lium spkdi- arization: an open source toolkit for diarization. In CMU SPUD Workshop.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Annotation syntaxique automatique de la partie orale du orf\u00e9o",
                "authors": [
                    {
                        "first": "Alexis",
                        "middle": [],
                        "last": "Nasr",
                        "suffix": ""
                    },
                    {
                        "first": "Franck",
                        "middle": [],
                        "last": "Dary",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Langages",
                "volume": "",
                "issue": "3",
                "pages": "87--102",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alexis Nasr, Franck Dary, Fr\u00e9d\u00e9ric Bechet, and Beno\u00eet Fabre. 2020. Annotation syntaxique automatique de la partie orale du orf\u00e9o. Langages, (3):87-102.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Algorithms for deterministic incremental dependency parsing",
                "authors": [
                    {
                        "first": "Joakim",
                        "middle": [],
                        "last": "Nivre",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Computational Linguistics",
                "volume": "34",
                "issue": "4",
                "pages": "513--553",
                "other_ids": {
                    "DOI": [
                        "10.1162/coli.07-056-R1-07-027"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incre- mental dependency parsing. Computational Linguis- tics, 34(4):513-553.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "A time delay neural network architecture for efficient modeling of long temporal contexts",
                "authors": [
                    {
                        "first": "Vijayaditya",
                        "middle": [],
                        "last": "Peddinti",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Povey",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjeev",
                        "middle": [],
                        "last": "Khudanpur",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Sixteenth Annual Conference of the International Speech Communication Association",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khu- danpur. 2015. A time delay neural network architec- ture for efficient modeling of long temporal contexts. In Sixteenth Annual Conference of the International Speech Communication Association.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "The kaldi speech recognition toolkit",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Povey",
                        "suffix": ""
                    },
                    {
                        "first": "Arnab",
                        "middle": [],
                        "last": "Ghoshal",
                        "suffix": ""
                    },
                    {
                        "first": "Gilles",
                        "middle": [],
                        "last": "Boulianne",
                        "suffix": ""
                    },
                    {
                        "first": "Lukas",
                        "middle": [],
                        "last": "Burget",
                        "suffix": ""
                    },
                    {
                        "first": "Ondrej",
                        "middle": [],
                        "last": "Glembek",
                        "suffix": ""
                    },
                    {
                        "first": "Nagendra",
                        "middle": [],
                        "last": "Goel",
                        "suffix": ""
                    },
                    {
                        "first": "Mirko",
                        "middle": [],
                        "last": "Hannemann",
                        "suffix": ""
                    },
                    {
                        "first": "Petr",
                        "middle": [],
                        "last": "Motlicek",
                        "suffix": ""
                    },
                    {
                        "first": "Yanmin",
                        "middle": [],
                        "last": "Qian",
                        "suffix": ""
                    },
                    {
                        "first": "Petr",
                        "middle": [],
                        "last": "Schwarz",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The kaldi speech recognition toolkit. Technical report, IEEE Signal Processing Society.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Purely sequence-trained neural networks for asr based on lattice-free mmi",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Povey",
                        "suffix": ""
                    },
                    {
                        "first": "Vijayaditya",
                        "middle": [],
                        "last": "Peddinti",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Galvez",
                        "suffix": ""
                    },
                    {
                        "first": "Pegah",
                        "middle": [],
                        "last": "Ghahremani",
                        "suffix": ""
                    },
                    {
                        "first": "Vimal",
                        "middle": [],
                        "last": "Manohar",
                        "suffix": ""
                    },
                    {
                        "first": "Xingyu",
                        "middle": [],
                        "last": "Na",
                        "suffix": ""
                    },
                    {
                        "first": "Yiming",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjeev",
                        "middle": [],
                        "last": "Khudanpur",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Interspeech",
                "volume": "",
                "issue": "",
                "pages": "2751--2755",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pe- gah Ghahremani, Vimal Manohar, Xingyu Na, Yim- ing Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for asr based on lattice-free mmi. In Interspeech, pages 2751-2755.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Phoneme-bert: Joint language modelling of phoneme sequence and asr transcript",
                "authors": [
                    {
                        "first": "Ayush",
                        "middle": [],
                        "last": "Mukuntha Narayanan Sundararaman",
                        "suffix": ""
                    },
                    {
                        "first": "Jithendra",
                        "middle": [],
                        "last": "Kumar",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Vepa",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mukuntha Narayanan Sundararaman, Ayush Kumar, and Jithendra Vepa. 2021. Phoneme-bert: Joint lan- guage modelling of phoneme sequence and asr tran- script.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Sequence-discriminative training of deep neural networks",
                "authors": [
                    {
                        "first": "Karel",
                        "middle": [],
                        "last": "Vesel\u1ef3",
                        "suffix": ""
                    },
                    {
                        "first": "Arnab",
                        "middle": [],
                        "last": "Ghoshal",
                        "suffix": ""
                    },
                    {
                        "first": "Luk\u00e1s",
                        "middle": [],
                        "last": "Burget",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Povey",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "In Interspeech",
                "volume": "2013",
                "issue": "",
                "pages": "2345--2349",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Karel Vesel\u1ef3, Arnab Ghoshal, Luk\u00e1s Burget, and Daniel Povey. 2013. Sequence-discriminative training of deep neural networks. In Interspeech, volume 2013, pages 2345-2349.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Ariya Rastrow, Andreas Stolcke, and Ivan Bulyko. 2022. Rescorebert: Discriminative speech recognition rescoring with bert",
                "authors": [
                    {
                        "first": "Liyan",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Yile",
                        "middle": [],
                        "last": "Gu",
                        "suffix": ""
                    },
                    {
                        "first": "Jari",
                        "middle": [],
                        "last": "Kolehmainen",
                        "suffix": ""
                    },
                    {
                        "first": "Haidar",
                        "middle": [],
                        "last": "Khan",
                        "suffix": ""
                    },
                    {
                        "first": "Ankur",
                        "middle": [],
                        "last": "Gandhe",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Liyan Xu, Yile Gu, Jari Kolehmainen, Haidar Khan, Ankur Gandhe, Ariya Rastrow, Andreas Stolcke, and Ivan Bulyko. 2022. Rescorebert: Discriminative speech recognition rescoring with bert.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "type_str": "figure",
                "num": null,
                "text": "FT -Word prediction accuracy of FlauBERT-O-base_uncasedFigures 1 to 4 show the results assessed at each epoch. In table 2, we summarise the results for the last epoch and also for the default ORAL -Word prediction accuracy of FlauBERT-O-asr, using the initial flaubert-baseuncased BPE tokenizer ORAL_NB -Word prediction accuracy of FlauBERT-O-asr_newbpe, using a new BPE tokenizer trained on ASR data",
                "uris": null
            },
            "FIGREF3": {
                "type_str": "figure",
                "num": null,
                "text": "TV news classification - train 38K, test 5K",
                "uris": null
            },
            "FIGREF4": {
                "type_str": "figure",
                "num": null,
                "text": "Figure 6: TV news classification - train 5K, test 38K",
                "uris": null
            },
            "FIGREF5": {
                "type_str": "figure",
                "num": null,
                "text": "Figure 7: TV news classification - train 500, test 47K",
                "uris": null
            },
            "FIGREF6": {
                "type_str": "figure",
                "num": null,
                "text": "TV news classification - train 200, test 47K",
                "uris": null
            },
            "FIGREF7": {
                "type_str": "figure",
                "num": null,
                "text": "LAS learning curve for the syntactic parser as a function of the quantity of training data. A similar shape is obtained for UAS and UPOS.",
                "uris": null
            },
            "TABREF2": {
                "type_str": "table",
                "text": "Word prediction task accuracies",
                "html": null,
                "content": "<table><tr><td>5 Downstream Task 1: Automatic Classification of TV Shows</td></tr></table>",
                "num": null
            },
            "TABREF5": {
                "type_str": "table",
                "text": "",
                "html": null,
                "content": "<table><tr><td>: Main result on syntax prediction. Metrics are Labeled Attachment Score (LAS), Unlabeled Attachment Score (UAS) and Part-of-speech tagging accuracy (UPOS). Higher is better, highest figure in bold.</td></tr></table>",
                "num": null
            },
            "TABREF7": {
                "type_str": "table",
                "text": "Effect of repunctuating speech transcripts on syntactic parsing prior to extracting representations. Results from the ORAL representations are given for reference.",
                "html": null,
                "content": "<table/>",
                "num": null
            }
        }
    }
}