{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:35:41.725695Z"
    },
    "title": "Exploring Linguistically-Lightweight Keyword Extraction Techniques for Indexing News Articles in a Multilingual Set-up",
    "authors": [
        {
            "first": "Jakub",
            "middle": [],
            "last": "Piskorski",
            "suffix": "",
            "affiliation": {},
            "email": "jpiskorski@gmail.com"
        },
        {
            "first": "Nicolas",
            "middle": [],
            "last": "Stefanovitch",
            "suffix": "",
            "affiliation": {
                "laboratory": "Joint Research Centre (JRC) Ispra",
                "institution": "",
                "location": {
                    "country": "Italy"
                }
            },
            "email": "nicolas.stefanovitch@ec.europa.eu"
        },
        {
            "first": "European",
            "middle": [],
            "last": "Commission",
            "suffix": "",
            "affiliation": {
                "laboratory": "Joint Research Centre (JRC) Ispra",
                "institution": "",
                "location": {
                    "country": "Italy"
                }
            },
            "email": ""
        },
        {
            "first": "Guillaume",
            "middle": [],
            "last": "Jacquet",
            "suffix": "",
            "affiliation": {
                "laboratory": "Joint Research Centre (JRC) Ispra",
                "institution": "",
                "location": {
                    "country": "Italy"
                }
            },
            "email": "guillaume.jacquet@ec.europa.eu"
        },
        {
            "first": "Aldo",
            "middle": [],
            "last": "Podavini",
            "suffix": "",
            "affiliation": {
                "laboratory": "Joint Research Centre (JRC) Ispra",
                "institution": "",
                "location": {
                    "country": "Italy"
                }
            },
            "email": "aldo.podavini@ec.europa.eu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper presents a study of state-of-theart unsupervised and linguistically unsophisticated keyword extraction algorithms, based on statistic-, graph-, and embedding-based approaches, including, i.a., Total Keyword Frequency, TF-IDF, RAKE, KPMiner, YAKE, KeyBERT, and variants of TextRank-based keyword extraction algorithms. The study was motivated by the need to select the most appropriate technique to extract keywords for indexing news articles in a realworld large-scale news analysis engine. The algorithms were evaluated on a corpus of circa 330 news articles in 7 languages. The overall best F 1 scores for all languages on average were obtained using a combination of the recently introduced YAKE algorithm and KPMiner (20.1%, 46.6% and 47.2% for exact, partial and fuzzy matching resp.).",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper presents a study of state-of-theart unsupervised and linguistically unsophisticated keyword extraction algorithms, based on statistic-, graph-, and embedding-based approaches, including, i.a., Total Keyword Frequency, TF-IDF, RAKE, KPMiner, YAKE, KeyBERT, and variants of TextRank-based keyword extraction algorithms. The study was motivated by the need to select the most appropriate technique to extract keywords for indexing news articles in a realworld large-scale news analysis engine. The algorithms were evaluated on a corpus of circa 330 news articles in 7 languages. The overall best F 1 scores for all languages on average were obtained using a combination of the recently introduced YAKE algorithm and KPMiner (20.1%, 46.6% and 47.2% for exact, partial and fuzzy matching resp.).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Keyword Extraction (KE) is the task of automated extraction of single or multiple-token phrases from a textual document that best express all key aspects of its content and can be seen as automated generation of a short document summary. It constitutes an enabling technology for document indexing, clustering, classification, summarization, etc. This paper presents a comparative study of the performance of some state-of-the-art unsupervised linguistically-lightweight keyword extraction methods and combinations thereof applied on news articles in seven languages. The main drive behind the reported work was to explore the usability of these methods for adding another level of indexing of news articles gathered and analysed by the Europe Media Monitor (EMM) 1 (Steinberger et al., 2017) , a large-scale multilingual real-time news gathering and analysis system, which processes an average of 300,000 online news articles per day in up to 70 languages and is serving several EU institutions and international organisations.",
                "cite_spans": [
                    {
                        "start": 299,
                        "end": 346,
                        "text": "clustering, classification, summarization, etc.",
                        "ref_id": null
                    },
                    {
                        "start": 766,
                        "end": 792,
                        "text": "(Steinberger et al., 2017)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "While a vast bulk of research and tools for KE have been reported in the past, the specific focus of our research was to select the most suitable KE methods for indexing news articles taking specifically into account the operational, multilingual and real-time processing character of EMM. Hence, only unsupervised, scalable vis-a-vis multilinguality and robust algorithms that do not require any sophisticated linguistic resources and are capable of processing single news article in a time-efficient manner were considered.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Keyword extraction has been the subject of research for decades. Both unsupervised and supervised approaches exist, the unsupervised being particularly popular due to the scarcity of annotated data as well as their domain independence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The unsupervised approaches are usually divided in three phases: (a) selection of candidate tokens that can constitute part of a keyword using some heuristics based on statistics and/or certain linguistic features (e.g., belonging to a specific part-ofspeech or not being a stop word, etc.), (b) rank-ing the selected tokens, and (c) generating keywords out of the selected tokens, where the final rank is computed using the scores of the individual tokens. The unsupervised methods are divided into: statistics-, graph-, embeddings-and language model-based ones. The statistics-based methods exploit frequency, positional and co-occurrence statistics in the process of selecting candidate keywords. The graph-based methods create a graph from textual documents with nodes representing the candidate keywords and edges representing some relatedness to other candidate keywords, and then deploy graph ranking algorithms, e.g. PageRank, TextRank, to rank the final set of keywords. Recently, a third group of methods emerged which are based on word (Mikolov et al., 2013) and sentence embeddings (Pagliardini et al., 2018) . Linguistic sophistication constitutes another dimension to look at the keyword extraction algorithms. Some of the methods use barely any language-specific resources, e.g., only stop word lists, whereas others exploit part-of-speech tagging or even syntactic parsing.",
                "cite_spans": [
                    {
                        "start": 1047,
                        "end": 1069,
                        "text": "(Mikolov et al., 2013)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 1094,
                        "end": 1120,
                        "text": "(Pagliardini et al., 2018)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The supervised methods are simply divided into shallow and deep learning methods. The shallow methods exploit either binary classifiers to decide whether a token sequence is a keyword, linear regression-based models to rank the candidate keywords, and sequence labelling techniques. The deep learning methods exploit encoder-decoder and sequence-to-sequence labelling approaches. Most of the supervised machine-learning approaches reported in the literature deploy more linguistic sophistication (i.e., linguistic features) vis-a-vis unsupervised methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Extensive surveys on keyword extraction methods and comparison of their relative performance are provided in (Papagiannopoulou and Tsoumakas, 2020; Hasan and Ng, 2014; Kilic and Cetin, 2019; Alami Merrouni et al., 2019 ).",
                "cite_spans": [
                    {
                        "start": 109,
                        "end": 147,
                        "text": "(Papagiannopoulou and Tsoumakas, 2020;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 148,
                        "end": 167,
                        "text": "Hasan and Ng, 2014;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 168,
                        "end": 190,
                        "text": "Kilic and Cetin, 2019;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 191,
                        "end": 218,
                        "text": "Alami Merrouni et al., 2019",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Since only a few monolingual corpora with keyword annotation of news articles exist (Marujo et al., 2013 (Marujo et al., , 2012 Bougouin et al., 2013) that use different approaches to keyword annotation, we have created a new multilingual corpus of circa 330 news articles annotated with keywords covering 7 languages which is used for evaluation purposes in our study. We are not aware of any similar multilingual resource available for research purposes.",
                "cite_spans": [
                    {
                        "start": 84,
                        "end": 104,
                        "text": "(Marujo et al., 2013",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 105,
                        "end": 127,
                        "text": "(Marujo et al., , 2012",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 128,
                        "end": 150,
                        "text": "Bougouin et al., 2013)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The paper is organized as follows. First, Section 2 introduces the Keyword Extraction task for news article indexing. Section 3 gives an overview of the methods explored. Next, Section 4 describes the creation of a multi-lingual data set and experiment results. Finally, we end up with conclusions and an outlook on future work in Section 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The purpose of KE might vary depending on the domain in which it is deployed. In media monitoring and analysis the main objective is to capture from the text of each news article the main topics discussed therein, the key events reported, the entities involved in these events and what is the outcome, impact and significance thereof. For the sake of specifying what the expected output of KE should be, and in order to guide human annotators tasked to create test datasets, the following constraints on keyword selection were introduced (here in simplified form):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 a keyword can be a single word or a sequence of up to 5 consecutive words (unless it is a long proper name) as they appear in the news article or the title thereof,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 a minimum of 5 and ideally not more than 15 keywords (with ca 30% margin -to provide some flexibility) should be selected, however the set of selected keywords may not constitute more than 50% of the body of the news article,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 a single keyword may not include more than one entity,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 a keyword has to be either a noun phrase, proper name, verb, adjective, phrasal verb, or part of a clause (e.g., 'Trump died'),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 a stand-alone adverb, conjunction, determiner, number, preposition or pronoun may not constitute a keyword,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 a full sentence can never constitute a keyword,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 keywords should not be converted into their corresponding base forms, disregarding the fact that a base form would appear more natural,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "\u2022 if there are many candidate keywords to represent the same concept, only one of them should be selected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Keyword Extraction Task",
                "sec_num": "2"
            },
            {
                "text": "Given the specific context of real-time media monitoring, our experiments imposed the following main selection criteria to the keyword extraction techniques to explore and evaluate:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "\u2022 efficiency: ability to process a single news article within a fraction of a second,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "\u2022 multi-linguality: ability to quickly adapt the method to the processing of many different languages,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "\u2022 robustness: ability to process corrupted data without impacting performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "Consequently, we have selected methods that: (a) do not require any language-specific resources except stop word lists and off-the-shelf pre-computed word embeddings, (b) exploit only information that can be computed in a time-efficient manner, e.g., frequency statistics, co-occurrence, positional information, string similarity, etc., (c) do not require any external text corpora (with one exception for a baseline method). The pool of methods (and variants thereof) explored includes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "Total Keyword Frequency (TKF) exploits only frequency information to rank candidate keywords, where candidates are 1-3 word n-grams from text that do not contain punctuation marks, and which neither start nor end with a stop word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
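            {
                "text": "To make the ranking concrete, the following minimal Python sketch (not part of the original paper) illustrates this frequency-only scheme; the naive whitespace tokenisation and the tiny illustrative stop-word list are assumptions of the sketch rather than the actual EMM implementation:\n\nfrom collections import Counter\n\nSTOP = {'the', 'a', 'an', 'of', 'in', 'and', 'to'}  # tiny illustrative stop-word list\n\ndef tkf_keywords(text, top_n=15):\n    # crude tokenisation: lowercase, purely alphanumeric tokens are candidates,\n    # everything else acts as a break so that n-grams never span punctuation\n    tokens = [t if t.isalnum() else None for t in text.lower().split()]\n    counts = Counter()\n    for n in (1, 2, 3):\n        for i in range(len(tokens) - n + 1):\n            gram = tokens[i:i + n]\n            if None in gram:  # candidate would span punctuation\n                continue\n            if gram[0] in STOP or gram[-1] in STOP:  # stop word at an edge\n                continue\n            counts[' '.join(gram)] += 1\n    return [kw for kw, _ in counts.most_common(top_n)]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },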
            {
                "text": "Term Frequency-Inverse Document Frequency (TF-IDF) constitutes the main baseline algorithm in our study. For the computation of TF-IDF scores a corpus consisting of 34.5M news articles gathered by EMM that span over the first 6 months of 2020 and covering ca. 70 languages was exploited. 2 A maximum of min(20, N/6) keywords with highest TF-IDF scores are returned for a news article, where N stands for the total number of tokens in the article.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
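            {
                "text": "A minimal Python sketch of this baseline is shown below (not from the paper); it assumes the IDF values have already been pre-computed from a background corpus and are supplied as a plain dictionary:\n\nfrom collections import Counter\n\ndef tfidf_keywords(tokens, idf, default_idf=1.0):\n    # tokens: tokenised article; idf: pre-computed inverse document frequencies\n    n = len(tokens)\n    tf = Counter(tokens)\n    scores = {t: (count / n) * idf.get(t, default_idf) for t, count in tf.items()}\n    max_keywords = min(20, n // 6)  # at most min(20, N/6) keywords per article\n    return sorted(scores, key=scores.get, reverse=True)[:max_keywords]\n\n# hypothetical usage with a toy IDF table:\n# tfidf_keywords('a strong earthquake hit the coast'.split(), {'earthquake': 6.2, 'the': 0.01})",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },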
            {
                "text": "Rapid Automatic Keyword Extraction (RAKE) exploits both frequency and co-occurrence information about tokens to score candidate keyword phrases (token sequences that do contain neither stop words nor phrase delimiters) (Rose et al., 2 In particular, the pool of 34.5M news articles included: 11309K English, 6746K Spanish, 2322K French, 2001K Italian, 1431K German, 760K Romanian and 183K Polish articles, which covers the languages of the evaluation dataset (see Section 4.1). 2010). More specifically, the score for a candidate keyword phrase is computed as the sum of its member word scores. We explored three options for scoring words: (a) s(w) = f requency(w) (RAKE-FREQ), (b) s(w) = degree(w) (RAKE-DEG), which stands for the number of other content words that co-occurr with w in any candidate keyword phrase, and (c) s(w) = degree(w)/f requency(w) (RAKE-DEGFREQ).",
                "cite_spans": [
                    {
                        "start": 219,
                        "end": 232,
                        "text": "(Rose et al.,",
                        "ref_id": null
                    },
                    {
                        "start": 233,
                        "end": 234,
                        "text": "2",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
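            {
                "text": "The three word-scoring variants can be sketched in Python as follows (an illustrative sketch, not the original implementation; it assumes the candidate phrases have already been extracted as lists of content words):\n\nfrom collections import defaultdict\n\ndef rake_word_scores(phrases, variant='degfreq'):\n    # phrases: candidate keyword phrases, each given as a list of content words\n    freq, degree = defaultdict(int), defaultdict(int)\n    for phrase in phrases:\n        for w in phrase:\n            freq[w] += 1\n            degree[w] += len(phrase) - 1  # other content words co-occurring with w\n    if variant == 'freq':  # RAKE-FREQ\n        return dict(freq)\n    if variant == 'deg':  # RAKE-DEG\n        return dict(degree)\n    return {w: degree[w] / freq[w] for w in freq}  # RAKE-DEGFREQ\n\ndef rake_phrase_score(phrase, word_scores):\n    # the score of a candidate phrase is the sum of its member word scores\n    return sum(word_scores.get(w, 0.0) for w in phrase)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },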
            {
                "text": "Keyphrase Miner (KP-Miner) exploits frequency and positional information about candidate keywords (word n-grams that do not contain punctuation marks, and which neither start nor end with a stop word) with some weighting of multi-token keywords (El-Beltagy and Rafea, 2009) . More precisely, the score of a candidate keyword (in the case of single document scenario) is computed as:",
                "cite_spans": [
                    {
                        "start": 245,
                        "end": 273,
                        "text": "(El-Beltagy and Rafea, 2009)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "s(k) = f req(k) \u2022 max( |K| \u03b1 \u2022 |K m | , \u03c9) \u2022 1 AvgP os(k)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "where f req(k), K, K m denote frequency of k, the set of all candidate keywords and the set of all multi-token candidate keywords resp., whereas \u03b1 and \u03c9 are two weight adjustment constants, and AvgP os(k) denotes the average position of the keyword in a text in terms of regions separated by punctuations. KP-Miner also has a specific cut-off parameter, which determines the number of tokens after which if the keyword appears for the first time it is filtered out and discarded as a candidate. Our version of KP-Miner does not include stemming different from the original one (El-Beltagy and Rafea, 2009) due to our multilingual context and the specification of KE task (see Section 2). Finally, KP-Miner scans the top n ranking candidates and removes the ones which constitute sub-parts of others and adjusts the scores accordingly. Based on the empirical observations the specific parameters, namely, \u03b1, \u03c9 and cut-off were set to 1.0, 3.0 and 1000 resp.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
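            {
                "text": "A minimal Python sketch of this scoring function (not from the paper; candidate extraction and the cut-off filter are assumed to have been applied beforehand, and the helper name kpminer_score is hypothetical):\n\ndef kpminer_score(freq, avg_pos, n_candidates, n_multitoken, alpha=1.0, omega=3.0):\n    # freq: frequency of the candidate; avg_pos: its average position in terms of\n    # punctuation-separated regions; n_candidates = |K|; n_multitoken = |K_m|\n    boost = max(n_candidates / (alpha * n_multitoken), omega) if n_multitoken else omega\n    return freq * boost * (1.0 / avg_pos)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },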
            {
                "text": "Yet Another Keyword Extraction (Yake) exploits a wider range of features (Campos et al., 2020) vis-a-vis RAKE and KP-Miner in the process of scoring single tokens. Like the two algorithms introduced earlier, YAKE selects as candidate keywords word n-grams that do not contain punctuation marks, and which neither start nor end with a stop word. However, on top of this, an additional token classification step is then carried out in order to filter out additional tokens that should not constitute part of a keyword (e.g. non alphanumeric character sequences, etc.). Single tokens are scored using the following formula:",
                "cite_spans": [
                    {
                        "start": 73,
                        "end": 94,
                        "text": "(Campos et al., 2020)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "Score(t) = T rel\u2212context (t) \u2022 T position (t) T case (t) + T f req\u2212norm (t)+Tsentence(t) T rel\u2212context (t)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "where: (a) T case (t) is a feature that reflects statistics on case information of all occurrences of t based on the assumption that uppercase tokens are more relevant than lowercase ones, (b) T position (t) is a feature that exploits positional information and boosts tokens that tend to appear at the beginning of a text, (c) T f req\u2212norm is a feature that gives higher value to tokens appearing more than the mean and balanced by the span provided by standard deviation, (d) T sentence (t) is a feature that boosts significance of tokens that appear in many different sentences, and (e) T rel\u2212context (t) is a relatedness to context indicator that 'downgrades' tokens that co-occur with higher number of unique tokens in a given window (see (Campos et al., 2020) for details). The score for a candidate keyword k = t 1 t 2 . . . t n is then computed as:",
                "cite_spans": [
                    {
                        "start": 744,
                        "end": 765,
                        "text": "(Campos et al., 2020)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
            {
                "text": "Score(k) = n i=1 Score(t i ) f requency(k) \u2022 (1 + n i=1 Score(t i ))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
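            {
                "text": "A minimal Python sketch of this aggregation step (not from the paper; it assumes the individual token scores have already been computed and that, as in YAKE, lower token scores indicate more important tokens):\n\nfrom math import prod\n\ndef yake_keyword_score(token_scores, keyword_freq):\n    # token_scores: the Score(t_i) values of the candidate's tokens;\n    # keyword_freq: frequency of the whole candidate keyword in the text\n    return prod(token_scores) / (keyword_freq * (1.0 + sum(token_scores)))\n\n# e.g. yake_keyword_score([0.12, 0.34], keyword_freq=4) for a two-token candidate",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },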
            {
                "text": "Once the candidate keywords are ranked, potential duplicates are removed by adding them in relevance order. When a new keyword is added it is compared against all more relevant candidates in terms of semantic similarity, and if this similarity is below a specified threshold it is discarded. While the original YAKE algorithm exploits for this purpose the Levenshtein distance, our implementation uses Weighted Logest Common Substrings string distance metric (Piskorski et al., 2009) which favours overlap in the initial part of the strings compared.",
                "cite_spans": [
                    {
                        "start": 459,
                        "end": 483,
                        "text": "(Piskorski et al., 2009)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },
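            {
                "text": "This deduplication step can be sketched as follows (an illustrative Python sketch, not the original implementation; plain longest-common-substring similarity and the 0.8 threshold are stand-ins for the weighted metric and the actual threshold used):\n\nfrom difflib import SequenceMatcher\n\ndef lcs_similarity(a, b):\n    # plain longest-common-substring similarity, a stand-in for the weighted variant\n    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))\n    return 2.0 * m.size / (len(a) + len(b)) if a and b else 0.0\n\ndef deduplicate(ranked_keywords, threshold=0.8):\n    accepted = []\n    for kw in ranked_keywords:  # most relevant candidates first\n        if all(lcs_similarity(kw, prev) < threshold for prev in accepted):\n            accepted.append(kw)\n    return accepted",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3"
            },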
            {
                "text": "Keyword Extraction (KEYEMB) exploits document embeddings and cosine similarity in order to identify candidate keywords. First, a document embedding is computed, then word n-grams of different sizes are generated, which are subsequently ranked along their similarity to the embedding of the document (Grootendorst, 2020) .",
                "cite_spans": [
                    {
                        "start": 299,
                        "end": 319,
                        "text": "(Grootendorst, 2020)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
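            {
                "text": "A minimal Python sketch of this ranking procedure (not from the paper; the embed callable stands for any off-the-shelf sentence encoder and is an assumption of the sketch):\n\nimport numpy as np\n\ndef cosine(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef keyemb_keywords(document, candidates, embed, top_n=15):\n    # embed: any callable mapping a string to a dense vector (e.g. a multilingual\n    # sentence encoder); it is an assumption of this sketch, not a concrete API\n    doc_vec = embed(document)\n    ranked = sorted(candidates, key=lambda c: cosine(embed(c), doc_vec), reverse=True)\n    return ranked[:top_n]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },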
            {
                "text": "We tested three different out-of-the-box transformer-based sentence embeddings. BERTbased ones are taken from (Reimers and Gurevych, 2020) , which are both multilingual and fine-tuned on natural language inference and semantic text similarity tasks. One version uses a basic BERT model (KEYEMB-BERT-B) and the other a lightweight BERT model (KEYEMB-BERT-D). Finally, KEYEMB-LASER is based on LASER (Artetxe and Schwenk, 2019) embeddings. Contrary to BERT, they have not been fine-tuned on semantic similarity tasks, but for the task of aligning similar multilingual concepts to the same semantic space.",
                "cite_spans": [
                    {
                        "start": 110,
                        "end": 138,
                        "text": "(Reimers and Gurevych, 2020)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
            {
                "text": "Filtering stop words without applying any of the different post-processing steps proposed in (Grootendorst, 2020) provided the best results and therefore is the setting we used in the evaluation and comparison against other methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
            {
                "text": "Graph-based Keyword Extraction: (GRAPH) exploits properties of a graph whose nodes are substrings extracted from the text in order to identify which are the most important (Litvak and Last, 2008) . This approach differs from TextRank (Mihalcea and Tarau, 2004), in two ways: firstly, the graph is constructed in a fundamentally different way yielding smaller graphs and therefore faster processing time; secondly, different lowercomplexity graph measures are also explored, allowing even faster processing time.",
                "cite_spans": [
                    {
                        "start": 172,
                        "end": 195,
                        "text": "(Litvak and Last, 2008)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
            {
                "text": "A node of the graph corresponds either to a sentence, a phrase delimited by any punctuation marks or a token sequence delimited by stop words. Two nodes are connected only if they share at least 20% of words after removal of stop words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
            {
                "text": "The importance of the nodes can be defined in different ways. In this study we looked at: (a) degree (GRAPH-DEGREE), which measures the absolute number of related sentences in the text, (b) centrality (GRAPH-CENTR) which intuitively measures the extent to which a specific node serves as a bridge to connect any unrelated pieces of information, (c) clustering (GRAPH-CLUST) which measure the level of interconnection between the neighbours of a node and itself, and finally, (d) the sum of the centrality and clustering measure (GRAPH-CE&CL). Please refer to (Brandes, 2005) for further details on these graph measures.",
                "cite_spans": [
                    {
                        "start": 559,
                        "end": 574,
                        "text": "(Brandes, 2005)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
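            {
                "text": "The graph construction and the node-importance measures can be sketched with networkx as follows (an illustrative sketch, not the original implementation; the denominator used for the 20% overlap test and the use of betweenness centrality for GRAPH-CENTR are assumptions of the sketch):\n\nimport networkx as nx\n\ndef build_graph(units, stop_words, overlap=0.20):\n    # units: sentences, punctuation-delimited phrases or stop-word-delimited chunks;\n    # an edge is added when two units share at least 20% of their content words\n    bags = [set(u.lower().split()) - stop_words for u in units]\n    g = nx.Graph()\n    g.add_nodes_from(range(len(units)))\n    for i in range(len(units)):\n        for j in range(i + 1, len(units)):\n            smaller = min(len(bags[i]), len(bags[j])) or 1\n            if len(bags[i] & bags[j]) / smaller >= overlap:\n                g.add_edge(i, j)\n    return g\n\ndef node_importance(g):\n    return {\n        'degree': dict(g.degree()),  # GRAPH-DEGREE\n        'centrality': nx.betweenness_centrality(g),  # GRAPH-CENTR\n        'clustering': nx.clustering(g),  # GRAPH-CLUST; GRAPH-CE&CL sums the last two per node\n    }",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },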
            {
                "text": "Although more sophisticated linguistic processing resources such as POS taggers and dependency parsers are available for at least several languages we did not consider KE techniques that exploit them since the range of languages covered would be still far away from the ca. 70 languages covered by EMM. Furthermore, although the BERT-based approaches to KE (even without any tuning) are known to be orders of magnitudes slower than the other methods, we explored them given the wide range of languages covered in terms of off-the-shelf embeddings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Embedding-based",
                "sec_num": null
            },
            {
                "text": "For the evaluation of the KE algorithms we created random samples of circa 50 news articles published in 2020 for 7 languages: English, French, German, Italian, Polish, Romanian and Spanish. The selection of the languages was motivated to cover all three main Indo-European language families: Germanic, Romance and Slavic languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": "4.1"
            },
            {
                "text": "The news articles were annotated with keywords by two human experts for each language in the following manner. Initially, all annotators were presented with the task definition, keyword selection guidelines, and annotated a couple of trial articles. Next, the annotators were tasked to select keywords for the proper set of 50 news articles for each language. The annotation was done by each annotator separately since we were interested to measure the discrepancies between annotators and differences between the languages. The final sets of documents used for evaluation for some of the languages contained less than 50 news articles due to some near duplicates encountered, etc. Table 1 shows the differences in terms of keyword annotation distribution across languages. The average number of keywords per article varies from 8.68 for French to 13.20 for German. At the token level, the average ranges from 20.66 annotated tokens (French) per article to 30.24 (Romanian). The discrepancies between annotators differ significantly across languages, e.g., for Polish, only 9.37% of the keywords are shared between the two annotators, whereas for Romanian, they are 48.68%. However, when one measures the differences at the token level the discrepancies are significantly smaller, i.e., for Polish, 49.67% of the tokens are shared between the annotators, whereas for Romanian, 69.16%. This comparison between annotators is completed by computing the percentage of \"fuzzy\" common tokens (Table 1) , corresponding to the common 4-gram characters. As expected, the percentage of \"fuzzy\" common tokens is higher than for exact common tokens for all languages. It increases by ca. 2 points for English, French, Italian, Spanish and more than 4 points for German, Polish and Romanian.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 682,
                        "end": 689,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 1486,
                        "end": 1495,
                        "text": "(Table 1)",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": "4.1"
            },
            {
                "text": "Based on the relatively high level of discrepancies between each pair of annotators per language (see Table 1 ) we decided to create the ground truth for evaluation by merging the respective keyword sets for each languages. The statistics of the resulting ground truth data are summarized in Table 2 . We can observe that the average number of keywords per article for Italian and French is significantly lower than for the other languages. The average number of tokens per keyword is quite stable, from 2.33 (Spanish) to 2.79 (English), except for German, 1.75 tokens per keyword, due to the frequent use of compounds in this language.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 102,
                        "end": 109,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 292,
                        "end": 299,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": "4.1"
            },
            {
                "text": "We have used the classical precision (P ), recall (R) and F 1 metrics for the evaluation purposes. The overall P , R and F 1 scores were computed as an average over the respective scores for single news articles.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "We have computed the scores in three different ways. In the exact matching mode, we consider that an extracted keyword is matched correctly only if exactly the same keyword occurs in the ground truth (or vice versa).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "In the partial matching mode, the match of a given keyword c vis-a-vis Ground Truth GT = {k 1 , . . . , k n } is computed as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "match(c) = max k\u2208GT 2 \u2022 commonT okens(c, k) |c| T + |k| T",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "where commonT okens(c, k) denotes the number of tokens that appear both in c and k, and |c| T (|k| T ) denote the number of tokens the keyword c (k) consists of. The value of match(c) is between 0 and 1. Analogously, in the fuzzy matching mode, the match of a given keyword c vis-a-vis Ground Truth GT = {k 1 , . . . , k n } is computed as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "match(c) = max k\u2208GT Similarity(c, k)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "where Similarity(c, k) is computed using Longest Common Substring similarity metric (Bergroth et al., 2000) , whose value is between 0 and 1. Both P and R are computed analogously using the concept of partial and fuzzy matching. The main rationale behind using the partial and fuzzy matching mode was the fact that exact matching is simply too strict in terms of penalisation of automatically extracted keywords which do have strong overlap with keywords in the ground truth.",
                "cite_spans": [
                    {
                        "start": 84,
                        "end": 107,
                        "text": "(Bergroth et al., 2000)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
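            {
                "text": "Both matching modes can be sketched in Python as follows (an illustrative sketch, not the evaluation code used in the study; set-based token overlap and the 2*LCS/(|c|+|k|) normalisation of the longest-common-substring similarity are assumptions of the sketch):\n\nfrom difflib import SequenceMatcher\n\ndef partial_match(candidate, ground_truth):\n    c_tokens = candidate.split()\n    best = 0.0\n    for k in ground_truth:\n        k_tokens = k.split()\n        common = len(set(c_tokens) & set(k_tokens))  # shared tokens\n        best = max(best, 2.0 * common / (len(c_tokens) + len(k_tokens)))\n    return best\n\ndef fuzzy_match(candidate, ground_truth):\n    def lcs_sim(a, b):\n        m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))\n        return 2.0 * m.size / (len(a) + len(b))\n    return max(lcs_sim(candidate, k) for k in ground_truth)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },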
            {
                "text": "Finally, we have also computed standard deviation (SD) for all metrics in order to observe whether any of the algorithms is prone to producing response outliers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Methodology",
                "sec_num": "4.2"
            },
            {
                "text": "We have evaluated all the algorithms described in Section 3 with the following settings, unless specified elsewhere differently: (a) the max. number of tokens per keyword is 3, whereas the minimum (maximum) number of characters is set to 2 (80), (b) keywords can neither start nor end with a stop word, (c) keywords cannot contain tokens composed only of non-alphanumeric characters, and (d) the default maximum number of keywords to return is 15. The main drive behind setting the maximum number of keywords to 15 is based on empirical observation, optimizing both F 1 score and not returning too long list of keywords.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
            {
                "text": "The overall performance of each algorithm averaged across languages, in term of P , R and F 1 scores is listed in Table 3 , respectively for exact, partial and fuzzy matching. In general, only the results for the best settings per algorithm type are provided except for YAKE and KPMINER, which performed overall best. More specifically, the table contains results of some additional variants of YAKE and its combinations with KPMiner, namely: (a) YAKE-15 and YAKE-20 which return 15 and 20 keywords resp., (b) YAKE-KPMINER-I (intersection) which returns the intersection of the results returned by YAKE-15 and KP-Miner, (c) YAKE-KPMINER-U (union) which merges up to 10 top keywords returned by YAKE and KP-Miner output, and (d) YAKE-KPMINER-R (re-ranking) which sums the ranks of the keywords returned by YAKE-15 and KPMINER and selects top 15 keywords after the re-ranking.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 114,
                        "end": 121,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
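            {
                "text": "The re-ranking combination can be sketched as follows (an illustrative Python sketch, not the original implementation; the penalty rank assigned to keywords missing from one of the two lists is an assumption of the sketch):\n\ndef rerank(yake_list, kpminer_list, top_n=15, penalty=100):\n    # sums the ranks a keyword obtains in the two ranked lists; keywords missing\n    # from one list receive a large penalty rank\n    def rank(kw, ranked):\n        return ranked.index(kw) + 1 if kw in ranked else penalty\n    combined = {kw: rank(kw, yake_list) + rank(kw, kpminer_list)\n                for kw in set(yake_list) | set(kpminer_list)}\n    return sorted(combined, key=combined.get)[:top_n]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },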
            {
                "text": "Across the three types of matching, the list of algorithms obtaining good results is quite stable (cf. Table 3 ). YAKE-KPMINER-R constantly obtaining the best F 1 , respectively 20.1%, 46.6% and 47.2% for the exact, partial and fuzzy matching, followed or equaled by the YAKE-KPMINER-U. YAKE-KPMINER-I obtained the best precision, respectively 28.5%, 55.9% and 57.2%. In terms of standard deviation (SD), YAKE-KPMINER-I appears to be the most unstable since it is constantly the algorithm with the highest SD, for P , R and F 1 , and for all types of matching.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 103,
                        "end": 110,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
            {
                "text": "As expected, the results obtained with partial and fuzzy matching are better than with exact matching. More interestingly, the fuzzy matching also allows to smooth the discrepancy between languages. Figure 1 highlights for YAKE-KPMINER-R algorithm how some languages like Polish, a highly inflected language, have a poor F 1 for exact matching, but are close to the all-language average for fuzzy matching. Figure 2 aims at comparing the results obtained in each language with a selection of algorithms for the fuzzy matching. The KPMINER algorithm appears to be best suited for the French language, whereas German the group of YAKE algorithms appears to be a better choice. There are some other language specific aspects according to the different algorithms, but less significant. As a matter of fact, the observations on YAKE and KPMINER strengths when applying on texts in specific languages were the main drive to introduce the various variants of combining these KE algorithms.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 199,
                        "end": 207,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 407,
                        "end": 415,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
            {
                "text": "One can also conclude from the evaluation figures that YAKE-KP-MINER-R appears to be the best \"all-rounder\" algorithm. In this context it is also important to emphasize that the performance of the various algorithms relies on the quality and coverage of the stop word lists, which are used by almost all algorithms compared here. In particular, the respective algorithms used identical stop word lists, covering: English (583 words), French (464), German (604), Italian (397), Polish (355), Romanian (282), and Spanish (352).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
            {
                "text": "KEYEMB-based approaches tend to focus only on the most important sentence in the news article. As such, frequently, several 3-grams candidates originating from the same sentence are returned, where most of them are redundant. Interestingly, as regards fuzzy matching KEYEMB-LASER performs better than BERT-based ones despite not being specially trained on similarity tasks, while KEYEMB-BERT-D performs overall best out of the three. It is worth mentioning that this approach is by far the slowest of the reported approaches in terms of time efficiency.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
            {
                "text": "GRAPH-based approaches suffer from a similar focusing bias: they tend to focus on the most important concepts, as such they are always present but so are some variations thereof, e.g. reporting most frequent words within all the different contexts they appear in, therefore generating redundant keywords. Among this family of algorithms, the GRAPH-DEGREE performed best, meaning that a high co-occurrence count is a good indicator of relevance for KE.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
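            {
                "text": "For illustration, a degree-based ranking of the kind underlying GRAPH-DEGREE can be sketched as follows (Python; a minimal sketch under our own assumptions: a word co-occurrence graph is built over a fixed-size sliding window and words are ranked by their accumulated co-occurrence degree; this is not the actual implementation):\n\nfrom collections import defaultdict\n\ndef degree_rank(tokens, window=3, top_k=15):\n    degree = defaultdict(int)\n    # count co-occurrences of word pairs within the sliding window\n    for i, word in enumerate(tokens):\n        for j in range(i + 1, min(i + window, len(tokens))):\n            if tokens[j] != word:\n                degree[word] += 1\n                degree[tokens[j]] += 1\n    # rank words by degree, i.e. by how often they co-occur with other words\n    return sorted(degree, key=degree.get, reverse=True)[:top_k]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },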
            {
                "text": "Embedding and graph-based approaches overfocus on the key concepts of a text. The fact that they are based on an indirect form of counting the most important words, without any further postprocessing, may in part explain why their performance is comparable to TF-IDF, which relies directly on frequency count. An advantage of graphbased approaches compared to embedding-based ones and TF-IDF is that they don't need to be trained in advance on any corpora. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3"
            },
            {
                "text": "Based on the results presented in the previous Section we carried out some additional experiments in order to explore whether the best performing algorithm, namely, YAKE-KPMINER-R, could be improved. In particular, given that this algorithm combines merging of keywords of two different algorithms, we have added an additional deduplication step. To be more precise, all keyword candidates that are properly included in other keyword candidates are discarded. We evaluated this new variant with different settings as regards the maximum allowed number of keywords returned. While we have not observed significant improvements in terms of the F 1 score when increasing the number of keywords returned by the algorithms described in the previous Section, the evaluation of YAKE-KPMINER-R with deduplication revealed that increasing this parameter yields some gains. Figure 3 and 4 provide P , R and F 1 curves for fuzzy matching according to the maximum number of keywords allowed to be returned for the English and German subcorpus.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 864,
                        "end": 872,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Deduplication",
                "sec_num": "4.3.1"
            },
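            {
                "text": "The deduplication step can be sketched as follows (Python; our own illustration, assuming the containment check is a case-insensitive substring test on the keyword strings):\n\ndef deduplicate(keywords):\n    # discard any keyword that is properly included in another keyword\n    lowered = [kw.lower() for kw in keywords]\n    return [kw for i, kw in enumerate(keywords)\n            if not any(lowered[i] != other and lowered[i] in other for other in lowered)]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deduplication",
                "sec_num": "4.3.1"
            },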
            {
                "text": "One can observe that shifting the maximum number of keywords to ca. 25 results in some improvement for F 1 and R. While these findings pave the way for some future explorations on parameter tuning to improve F 1 figures, one needs to emphasize here that increasing the number of keywords, even if resulting in some small gains in F 1 is not a desired feature from an application point of view, where analysts expect and prefer to 'see less than more'.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deduplication",
                "sec_num": "4.3.1"
            },
            {
                "text": "We have carried out a small comparison of the runtime behaviour of the algorithms with respect to the time needed to process a collection of 16983 news articles on Covid-19 in English (84.9 MB of space on disk). The time given in seconds to run KTF, Rake, KPMiner, Yake and some variants thereof are provided in Table 4 . All the aforementioned algorithms have been implemented in Java and optimized in term of efficient data structures used that correspond to the upper bounds of the respective time complexity of these algorithms. Both embedding-and graph-based algorithms explored in our study were implemented in Python, using some existing libraries, and were not optimized for speed. For these reasons, it is not meaningful to report their exact time performance. As before, on a given CPU, embedding-based approaches run an order of magnitude slower than graph based algorithms, which themselves run a magnitude slower than the simpler algorithms, whose performance is reported in Table 4 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 312,
                        "end": 319,
                        "text": "Table 4",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 988,
                        "end": 995,
                        "text": "Table 4",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Time efficiency performance",
                "sec_num": "4.4"
            },
            {
                "text": "This paper presented the results of a small comparative study of the performance of some stateof-the-art knowledge-lightweight keyword extraction methods in the context of indexing news articles in various languages with keywords. The best performing method, namely, a combination of Yake and KPMiner algorithms, obtained F 1 score of 20.1%, 46.6% and 47.2% for the exact, partial and fuzzy matching respectively. Since both of these algorithms exploit neither any languagespecific (except stop word lists) nor other external resources like domain-specific corpora, this solution can be easily adapted to the processing of many languages and constitutes a strong baseline for further explorations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Outlook",
                "sec_num": "5"
            },
            {
                "text": "The comparison presented in this paper is not exhaustive, other linguistically-lightweight unsupervised approaches could be explored, e.g., the graph-centric approach presented in (Skrlj et al., 2019) , and some post-processing filters to merge redundant keywords going beyond exploiting string similarity metrics, and simultaneously, techniques to improve diversification of the keywords returned.",
                "cite_spans": [
                    {
                        "start": 180,
                        "end": 200,
                        "text": "(Skrlj et al., 2019)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Outlook",
                "sec_num": "5"
            },
            {
                "text": "Extending the approaches explored in this study, e.g., through use of part-of-speech-based patterns to filter out implausible keywords (e.g., imposing constraints to include only adjectives and nouns as elements of keywords), use of more elaborated graph-based keyword ranking methods (e.g. Page Rank), integration of semantics (e.g., linking semantic meaning to text sequences through using knowledge bases and semantic networks (Papagiannopoulou and Tsoumakas, 2020; Hasan and Ng, 2014; Kilic and Cetin, 2019; Alami Merrouni et al., 2019) ) would potentially allow to improve the performance. However, these extensions would require significantly more linguistic sophistication, and consequently would be more difficult to port across languages.",
                "cite_spans": [
                    {
                        "start": 430,
                        "end": 468,
                        "text": "(Papagiannopoulou and Tsoumakas, 2020;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 469,
                        "end": 488,
                        "text": "Hasan and Ng, 2014;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 489,
                        "end": 511,
                        "text": "Kilic and Cetin, 2019;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 512,
                        "end": 540,
                        "text": "Alami Merrouni et al., 2019)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Outlook",
                "sec_num": "5"
            },
            {
                "text": "For matters related to accessing the ground truth dataset created for the sake of carrying out the evaluation presented in this paper please contact the authors.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Outlook",
                "sec_num": "5"
            },
            {
                "text": "https://emm.newsbrief.eu/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We are greatly indebted to Stefano Bucci, Florentina Ciltu, Corrado Mirra, Monica De Paola, Te\u00f3filio Garcia, Camelia Ignat, Jens Linge, Manuel Marker, Ma\u0142gorzata Piskorska, Camille Schaeffer, Jessica Scornavacche and Beatriz Torighelli for helping us with the keyword annotation of news articles in various languages. We are also thankful to Martin Atkinson who contributed to the work presented in this report, and to Charles MacMillan for proofreading the paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Automatic keyphrase extraction: a survey and trends",
                "authors": [
                    {
                        "first": "Bouchra",
                        "middle": [],
                        "last": "Zakariae Alami Merrouni",
                        "suffix": ""
                    },
                    {
                        "first": "Brahim",
                        "middle": [],
                        "last": "Frikh",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ouhbi",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Journal of Intelligent Information Systems",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zakariae Alami Merrouni, Bouchra Frikh, and Brahim Ouhbi. 2019. Automatic keyphrase extraction: a sur- vey and trends. Journal of Intelligent Information Systems, 54.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond",
                "authors": [
                    {
                        "first": "Mikel",
                        "middle": [],
                        "last": "Artetxe",
                        "suffix": ""
                    },
                    {
                        "first": "Holger",
                        "middle": [],
                        "last": "Schwenk",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "7",
                "issue": "",
                "pages": "597--610",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A survey of longest common subsequence algorithms",
                "authors": [
                    {
                        "first": "Lasse",
                        "middle": [],
                        "last": "Bergroth",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Hakonen",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Raita",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "39--48",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lasse Bergroth, H. Hakonen, and T. Raita. 2000. A survey of longest common subsequence algorithms. pages 39-48.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "TopicRank: Graph-based topic ranking for keyphrase extraction",
                "authors": [
                    {
                        "first": "Adrien",
                        "middle": [],
                        "last": "Bougouin",
                        "suffix": ""
                    },
                    {
                        "first": "Florian",
                        "middle": [],
                        "last": "Boudin",
                        "suffix": ""
                    },
                    {
                        "first": "B\u00e9atrice",
                        "middle": [],
                        "last": "Daille",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 6 th International Joint Conference on NLP",
                "volume": "",
                "issue": "",
                "pages": "543--551",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Adrien Bougouin, Florian Boudin, and B\u00e9atrice Daille. 2013. TopicRank: Graph-based topic ranking for keyphrase extraction. In Proceedings of the 6 th In- ternational Joint Conference on NLP, pages 543- 551, Nagoya, Japan.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Network analysis: methodological foundations",
                "authors": [],
                "year": 2005,
                "venue": "",
                "volume": "3418",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ulrik Brandes. 2005. Network analysis: methodologi- cal foundations, volume 3418. Springer Science & Business Media.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Yake! keyword extraction from single documents using multiple local features",
                "authors": [
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Campos",
                        "suffix": ""
                    },
                    {
                        "first": "V\u00edtor",
                        "middle": [],
                        "last": "Mangaravite",
                        "suffix": ""
                    },
                    {
                        "first": "Arian",
                        "middle": [],
                        "last": "Pasquali",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Jorge",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Nunes",
                        "suffix": ""
                    },
                    {
                        "first": "Adam",
                        "middle": [],
                        "last": "Jatowt",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Inf. Sci",
                "volume": "509",
                "issue": "",
                "pages": "257--289",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ricardo Campos, V\u00edtor Mangaravite, Arian Pasquali, A. Jorge, C. Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. Inf. Sci., 509:257-289.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Kpminer: A keyphrase extraction system for english and arabic documents",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Samhaa",
                        "suffix": ""
                    },
                    {
                        "first": "Ahmed",
                        "middle": [
                            "A"
                        ],
                        "last": "El-Beltagy",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Rafea",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Inf. Syst",
                "volume": "34",
                "issue": "1",
                "pages": "132--144",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Samhaa R. El-Beltagy and Ahmed A. Rafea. 2009. Kp- miner: A keyphrase extraction system for english and arabic documents. Inf. Syst., 34(1):132-144.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Keybert: Minimal keyword extraction with bert",
                "authors": [
                    {
                        "first": "Maarten",
                        "middle": [],
                        "last": "Grootendorst",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.5281/zenodo.4461265"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Maarten Grootendorst. 2020. Keybert: Minimal key- word extraction with bert.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Automatic keyphrase extraction: A survey of the state of the art",
                "authors": [
                    {
                        "first": "Saidul",
                        "middle": [],
                        "last": "Kazi",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [],
                        "last": "Hasan",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 52 nd ACL Conference",
                "volume": "",
                "issue": "",
                "pages": "1262--1273",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art. In Proceedings of the 52 nd ACL Conference, pages 1262-1273, Baltimore, Maryland. ACL.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A survey on keyword and key phrase extraction with deep learning",
                "authors": [
                    {
                        "first": "Ozlem",
                        "middle": [],
                        "last": "Kilic",
                        "suffix": ""
                    },
                    {
                        "first": "Ayd\u0131n",
                        "middle": [],
                        "last": "Cetin",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "1--6",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ozlem Kilic and Ayd\u0131n Cetin. 2019. A survey on key- word and key phrase extraction with deep learning. pages 1-6.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Graph-based keyword extraction for single-document summarization",
                "authors": [
                    {
                        "first": "Marina",
                        "middle": [],
                        "last": "Litvak",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Last",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Coling 2008: Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization",
                "volume": "",
                "issue": "",
                "pages": "17--24",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summariza- tion. In Coling 2008: Proceedings of the work- shop Multi-source Multilingual Information Extrac- tion and Summarization, pages 17-24.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Supervised topical key phrase extraction of news stories using crowdsourcing, light filtering and coreference normalization. Language Resources and Evaluation",
                "authors": [
                    {
                        "first": "Lu\u00eds",
                        "middle": [],
                        "last": "Marujo",
                        "suffix": ""
                    },
                    {
                        "first": "Anatole",
                        "middle": [],
                        "last": "Gershman",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "Jaime"
                        ],
                        "last": "Carbonell",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [
                            "Robert"
                        ],
                        "last": "Frederking",
                        "suffix": ""
                    },
                    {
                        "first": "Paulo Jo\u00e3o",
                        "middle": [],
                        "last": "Neto",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "399--403",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lu\u00eds Marujo, Anatole Gershman, G. Jaime Carbonell, E. Robert Frederking, and Paulo Jo\u00e3o Neto. 2012. Supervised topical key phrase extraction of news stories using crowdsourcing, light filtering and co- reference normalization. Language Resources and Evaluation, pages 399-403.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Keyphrase cloud generation of broadcast news. Proceedings of INTERSPEECH",
                "authors": [
                    {
                        "first": "Lu\u00eds",
                        "middle": [],
                        "last": "Marujo",
                        "suffix": ""
                    },
                    {
                        "first": "M\u00e1rcio",
                        "middle": [],
                        "last": "Viveiros",
                        "suffix": ""
                    },
                    {
                        "first": "Jo\u00e3o",
                        "middle": [],
                        "last": "Neto",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lu\u00eds Marujo, M\u00e1rcio Viveiros, and Jo\u00e3o Neto. 2013. Keyphrase cloud generation of broadcast news. Pro- ceedings of INTERSPEECH 2013.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Textrank: Bringing order into text",
                "authors": [
                    {
                        "first": "Rada",
                        "middle": [],
                        "last": "Mihalcea",
                        "suffix": ""
                    },
                    {
                        "first": "Paul",
                        "middle": [],
                        "last": "Tarau",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
                "volume": "",
                "issue": "",
                "pages": "404--411",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Distributed representations of words and phrases and their compositionality",
                "authors": [
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    },
                    {
                        "first": "Ilya",
                        "middle": [],
                        "last": "Sutskever",
                        "suffix": ""
                    },
                    {
                        "first": "Kai",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Greg",
                        "middle": [
                            "S"
                        ],
                        "last": "Corrado",
                        "suffix": ""
                    },
                    {
                        "first": "Jeff",
                        "middle": [],
                        "last": "Dean",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "NIPS",
                "volume": "",
                "issue": "",
                "pages": "3111--3119",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111-3119. Curran Associates, Inc.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Unsupervised learning of sentence embeddings using compositional n-gram features",
                "authors": [
                    {
                        "first": "Matteo",
                        "middle": [],
                        "last": "Pagliardini",
                        "suffix": ""
                    },
                    {
                        "first": "Prakhar",
                        "middle": [],
                        "last": "Gupta",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Jaggi",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of",
                "volume": "",
                "issue": "",
                "pages": "528--540",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embed- dings using compositional n-gram features. In Pro- ceedings of NAACL 2018, pages 528-540, New Or- leans, Louisiana. ACL.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A review of keyphrase extraction",
                "authors": [
                    {
                        "first": "Eirini",
                        "middle": [],
                        "last": "Papagiannopoulou",
                        "suffix": ""
                    },
                    {
                        "first": "Grigorios",
                        "middle": [],
                        "last": "Tsoumakas",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Data Mining and Knowledge Discovery",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eirini Papagiannopoulou and Grigorios Tsoumakas. 2020. A review of keyphrase extraction. Wiley Inter- disciplinary Reviews: Data Mining and Knowledge Discovery, 10.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "On knowledge-poor methods for person name matching and lemmatization for highly inflectional languages",
                "authors": [
                    {
                        "first": "Jakub",
                        "middle": [],
                        "last": "Piskorski",
                        "suffix": ""
                    },
                    {
                        "first": "Karol",
                        "middle": [],
                        "last": "Wieloch",
                        "suffix": ""
                    },
                    {
                        "first": "Marcin",
                        "middle": [],
                        "last": "Sydow",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Information Retrieval",
                "volume": "12",
                "issue": "3",
                "pages": "275--299",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jakub Piskorski, Karol Wieloch, and Marcin Sydow. 2009. On knowledge-poor methods for person name matching and lemmatization for highly inflectional languages. Information Retrieval, 12(3):275-299.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Making monolingual sentence embeddings multilingual using knowledge distillation",
                "authors": [
                    {
                        "first": "Nils",
                        "middle": [],
                        "last": "Reimers",
                        "suffix": ""
                    },
                    {
                        "first": "Iryna",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual us- ing knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Automatic keyword extraction from individual documents",
                "authors": [
                    {
                        "first": "Stuart",
                        "middle": [],
                        "last": "Rose",
                        "suffix": ""
                    },
                    {
                        "first": "Dave",
                        "middle": [],
                        "last": "Engel",
                        "suffix": ""
                    },
                    {
                        "first": "Nick",
                        "middle": [],
                        "last": "Cramer",
                        "suffix": ""
                    },
                    {
                        "first": "Wendy",
                        "middle": [],
                        "last": "Cowley",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Text Mining. Applications and Theory",
                "volume": "",
                "issue": "",
                "pages": "1--20",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. In Michael W. Berry and Ja- cob Kogan, editors, Text Mining. Applications and Theory, pages 1-20. John Wiley and Sons, Ltd.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Rakun: Rank-based keyword extraction via unsupervised learning and meta vertex aggregation",
                "authors": [
                    {
                        "first": "Blaz",
                        "middle": [],
                        "last": "Skrlj",
                        "suffix": ""
                    },
                    {
                        "first": "Andraz",
                        "middle": [],
                        "last": "Repar",
                        "suffix": ""
                    },
                    {
                        "first": "Senja",
                        "middle": [],
                        "last": "Pollak",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "SLSP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Blaz Skrlj, Andraz Repar, and Senja Pollak. 2019. Rakun: Rank-based keyword extraction via unsu- pervised learning and meta vertex aggregation. In SLSP.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "EMM: Supporting the analyst by turning multilingual text into structured data",
                "authors": [
                    {
                        "first": "Ralf",
                        "middle": [],
                        "last": "Steinberger",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Atkinson",
                        "suffix": ""
                    },
                    {
                        "first": "Teofilo",
                        "middle": [],
                        "last": "Garcia",
                        "suffix": ""
                    },
                    {
                        "first": "Erik",
                        "middle": [],
                        "last": "Van Der Goot",
                        "suffix": ""
                    },
                    {
                        "first": "Jens",
                        "middle": [],
                        "last": "Linge",
                        "suffix": ""
                    },
                    {
                        "first": "Charles",
                        "middle": [],
                        "last": "Macmillan",
                        "suffix": ""
                    },
                    {
                        "first": "Hristo",
                        "middle": [],
                        "last": "Tanev",
                        "suffix": ""
                    },
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Verile",
                        "suffix": ""
                    },
                    {
                        "first": "Gerhard",
                        "middle": [],
                        "last": "Wagner",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Transparenz Aus Verantwortung: Neue Herausforderungen F\u00fcr Die Digitale Datenanalyse",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ralf Steinberger, Martin Atkinson, Teofilo Garcia, Erik van der Goot, Jens Linge, Charles Macmillan, Hristo Tanev, Marco Verile, and Gerhard Wagner. 2017. EMM: Supporting the analyst by turning multilin- gual text into structured data. In Transparenz Aus Verantwortung: Neue Herausforderungen F\u00fcr Die Digitale Datenanalyse. Erich Schmidt Verlag.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "F 1 scores for exact, partial and fuzzy matching for YAKE-KPMINER-R.",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "TABREF1": {
                "text": "Exact and fuzzy overlap of keywords and tokens for annotator pairs for each language.",
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>Language</td><td colspan=\"3\">#articles avg. nb of avg. nb of</td></tr><tr><td/><td/><td colspan=\"2\">keywords tokens per</td></tr><tr><td/><td/><td>per article</td><td>keyword</td></tr><tr><td>English</td><td>50</td><td>22.04</td><td>2.79</td></tr><tr><td>French</td><td>47</td><td>14.34</td><td>2.70</td></tr><tr><td>German</td><td>50</td><td>21.36</td><td>1.75</td></tr><tr><td>Italian</td><td>50</td><td>16.16</td><td>2.34</td></tr><tr><td>Polish</td><td>39</td><td>21.18</td><td>2.67</td></tr><tr><td>Romanian</td><td>49</td><td>20.61</td><td>2.62</td></tr><tr><td>Spanish</td><td>48</td><td>22.75</td><td>2.33</td></tr></table>",
                "html": null
            },
            "TABREF2": {
                "text": "",
                "num": null,
                "type_str": "table",
                "content": "<table/>",
                "html": null
            },
            "TABREF4": {
                "text": "Time efficiency comparison on a set of circa 17K news articles in English on Covid-19.",
                "num": null,
                "type_str": "table",
                "content": "<table/>",
                "html": null
            }
        }
    }
}