{
    "paper_id": "D15-1012",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T16:28:36.471780Z"
    },
    "title": "Phrase-based Compressive Cross-Language Summarization",
    "authors": [
        {
            "first": "Jin-Ge",
            "middle": [],
            "last": "Yao",
            "suffix": "",
            "affiliation": {},
            "email": "yaojinge@pku.edu.cn"
        },
        {
            "first": "Xiaojun",
            "middle": [],
            "last": "Wan",
            "suffix": "",
            "affiliation": {},
            "email": "wanxiaojun@pku.edu.cn"
        },
        {
            "first": "Jianguo",
            "middle": [],
            "last": "Xiao",
            "suffix": "",
            "affiliation": {},
            "email": "xiaojianguo@pku.edu.cn"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The task of cross-language document summarization is to create a summary in a target language from documents in a different source language. Previous methods only involve direct extraction of automatically translated sentences from the original documents. Inspired by phrasebased machine translation, we propose a phrase-based model to simultaneously perform sentence scoring, extraction and compression. We design a greedy algorithm to approximately optimize the score function. Experimental results show that our methods outperform the state-of-theart extractive systems while maintaining similar grammatical quality.",
    "pdf_parse": {
        "paper_id": "D15-1012",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The task of cross-language document summarization is to create a summary in a target language from documents in a different source language. Previous methods only involve direct extraction of automatically translated sentences from the original documents. Inspired by phrasebased machine translation, we propose a phrase-based model to simultaneously perform sentence scoring, extraction and compression. We design a greedy algorithm to approximately optimize the score function. Experimental results show that our methods outperform the state-of-theart extractive systems while maintaining similar grammatical quality.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The task of cross-language summarization is to produce a summary in a target language from documents written in a different source language. This task is particularly useful for readers to quickly get the main idea of documents written in a source language that they are not familiar with. Following Wan (2011), we focus on English-to-Chinese summarization in this work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The simplest and the most straightforward way to perform cross-language summarization is pipelining general summarization and machine translation. Such systems either translate all the documents before running generic summarization algorithms on the translated documents, or summarize from the original documents and then only translate the produced summary into the target language. Wan (2011) show that such pipelining approaches are inferior to methods that utilize information from both sides. In that work, the author proposes graph-based models and achieves fair amount of improvement. However, to the best of our knowledge, no previous work of this task tries to focus on summarization beyond pure sentence extraction.",
                "cite_spans": [
                    {
                        "start": 384,
                        "end": 394,
                        "text": "Wan (2011)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "On the other hand, cross-language summarization can be seen as a special kind of machine translation: translating the original documents into a brief summary in a different language. Inspired by phrase-based machine translation models (Koehn et al., 2003) , we propose a phrase-based scoring scheme for cross-language summarization in this work.",
                "cite_spans": [
                    {
                        "start": 235,
                        "end": 255,
                        "text": "(Koehn et al., 2003)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Since our framework is based on phrases, we are not limited to produce extractive summaries. We can use the scoring scheme to perform joint sentence selection and compression. Unlike typical sentence compression methods, our proposed algorithm does not require additional syntactic preprocessing such as part-of-speech tagging or syntactic parsing. We only utilize information from translated texts with phrase alignments. The scoring function consists of a submodular term of compressed sentences and a bounded distortion penalty term. We design a greedy procedure to efficiently get approximate solutions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "For experimental evaluation, we use the DUC2001 dataset with manually translated reference Chinese summaries. Results based on the ROUGE metrics show the effectiveness of our proposed methods. We also conduct manual evaluation and the results suggest that the linguistic quality of produced summaries is not decreased by too much, compared with extractive counterparts. In some cases, the grammatical smoothness can even be improved by compression.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The contributions of this paper include:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 Utilizing the phrase alignment information, we design a scoring scheme for the crosslanguage document summarization task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 We design an efficient greedy algorithm to generate summaries. The greedy algorithm is partially submodular and has a provable constant approximation factor to the optimal solution up to a small constant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 We achieve state-of-the-art results using the extractive counterpart of our compressive summarization framework. Performance in terms of ROUGE metrics can be significantly improved when simultaneously performing extraction and compression.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Document summarization can be treated as a special kind of translation process: translating from a bunch of related source documents to a short target summary. This analogy also holds for crosslanguage document summarization, with the only difference that the languages of source documents and the target summary are different. Our design of sentence scoring function for cross-language document summarization purpose is inspired by phrase-based machine translation models. Here we briefly describe the general idea of phrase-based translation. One may refer to Koehn (2009) for more detailed description.",
                "cite_spans": [
                    {
                        "start": 562,
                        "end": 574,
                        "text": "Koehn (2009)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Background",
                "sec_num": "2"
            },
            {
                "text": "Phrase-based machine translation models are currently giving state-of-the-art translations for many pairs of languages and dominating modern statistical machine translation. Classical word-based IBM models cannot capture local contextual information and local reordering very well. Phrasebased translation models operate on lexical entries with more than one word on the source language and the target language. The allowance of multiword expressions is believed to be the main reason for the improvements that phrase-based models give. Note that these multi-word expressions, typically addressed as phrases in machine translation literature, are essentially continuous n-grams and do not need to be linguistically integrate and meaningful constituents.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Machine Translation",
                "sec_num": "2.1"
            },
            {
                "text": "Define y as a phrase-based derivation, or more precisely a finite sequence of phrases p 1 , p 2 , . . . , p L . For any derivation y we use e(y) to refer to the target-side translation text defined by y. This translation is derived by concatenating the strings e(p 1 ), e(p 2 ), . . . , e(p L ). The scoring scheme for a phrase-based derivation y from the source sentence to the target sentence e(y) is:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Machine Translation",
                "sec_num": "2.1"
            },
            {
                "text": "f (y) = L k=1 g(p k ) + LM (e(y)) + L\u22121 k=1 \u03b7|start(p k+1 ) \u2212 1 \u2212 end(p k )|",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Machine Translation",
                "sec_num": "2.1"
            },
            {
                "text": "where LM (\u2022) is the target-side language model score, g(\u2022) is the score function of phrases, \u03b7 < 0 is the distortion parameter for penalizing the distance between neighboring phrases in the derivation. Note that the phrases addressed here are typically continuous n-grams and need not to be grammatical linguistic phrasal units. Later we will directly use phrases provided by modern machine translation systems.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Machine Translation",
                "sec_num": "2.1"
            },
            {
                "text": "Searching for the best translation under this score definition is difficult in general. Thus approximate decoding algorithms such as beam search should be applied. Meanwhile, several constraints should be satisfied during the decoding process. The most important one is to set a constant limit of the distortion term |start(p k+1 ) \u2212 1 \u2212 end(p k )| \u2264 \u03b4 to exhibit derivations with distant phrase translations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Machine Translation",
                "sec_num": "2.1"
            },
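            {
                "text": "To make the scoring scheme concrete, the following minimal Python sketch scores a derivation under the formula above. The phrase scorer g, the language model lm, and the Phrase fields are illustrative assumptions for this sketch, not components specified in this paper.\n\nfrom dataclasses import dataclass\nfrom typing import Callable, List\n\n@dataclass(frozen=True)\nclass Phrase:\n    start: int   # source-side start position of the phrase\n    end: int     # source-side end position of the phrase\n    target: str  # target-side string e(p)\n\ndef derivation_score(y: List[Phrase],\n                     g: Callable[[Phrase], float],\n                     lm: Callable[[str], float],\n                     eta: float = -0.5) -> float:\n    # f(y) = sum_k g(p_k) + LM(e(y)) + eta * sum_k |start(p_{k+1}) - 1 - end(p_k)|\n    phrase_score = sum(g(p) for p in y)\n    lm_score = lm(' '.join(p.target for p in y))\n    distortion = sum(abs(y[k + 1].start - 1 - y[k].end)\n                     for k in range(len(y) - 1))\n    return phrase_score + lm_score + eta * distortion",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Machine Translation",
                "sec_num": "2.1"
            },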
            {
                "text": "Inspired by the general idea of phrase-based machine translation, we describe our proposed phrase-based model for cross-language summarization in this section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Cross-Language Summarization",
                "sec_num": "3"
            },
            {
                "text": "In the context of cross-language summarization, here we assume that we can also have phrases in both source and target languages along with phrase alignments between the two sides. For summarization purposes, we may wish to select sentences containing more important phrases. Then it is plausible to measure the scores of these aligned phrases via importance weighing. Inspired by phrase-based translation models, we can assign phrase-based scores to sentences from the translated documents for summarization purposes. We define our scoring function for each sentence s as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "F (s) = p\u2208s d 0 g(p) + bg(s) +\u03b7 dist(y(s))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "Here in the first term g(\u2022) is the score of phrase p, which can be simply set to document frequency. The phrase score is penalized with a constant damping factor d 0 to decay scores for repeated phrases. The second term bg(s) is the bigram score of sentence s. It is used here to simulate the effect of language models in phrase-based translation models. Denoting y(s) as the phrasebased derivation (as mentioned earlier in the previous section) of sentence s, the last distortion term",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "dist(y(s)) = L k=1 |start(p k+1 ) \u2212 1 \u2212 end(p k )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "| is exactly the same as the distortion penalty term in phrase-based translation models. This term can be used as a reflection of complexity of the translation. All the above terms can be derived from bilingual sentence pairs with phrase alignments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "Meanwhile, we may also wish to exclude unimportant phrases and badly translated phrases. Our definition can also be used to guide sentence compression by trying to remove redundant phrase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "Based on the definition over sentences, we define our summary scoring measure over a summary S:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "F (S) = p\u2208S count(p,S) i=1 d i\u22121 g(p) + s\u2208S bg(s) +\u03b7 s\u2208S dist(y(s))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "where d is a predefined constant damping factor to penalize repeated occurrences of the same phrases, count(p, S) is the number of occurrences in the summary S for phrase p. All other terms are inherited from the sentence score definition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
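            {
                "text": "Since \\sum_{i=1}^{c} d^{i-1} = (1 - d^c) / (1 - d) for d \u2260 1, the damped phrase term has a closed form. The Python sketch below renders F(S) using this observation; the sentence objects with a .phrases attribute and the g, bg and dist helpers are hypothetical stand-ins assumed for illustration.\n\nfrom collections import Counter\n\ndef summary_score(sentences, g, bg, dist, d=0.5, eta=-0.5):\n    # Count how often each phrase occurs across the whole summary S.\n    counts = Counter(p for s in sentences for p in s.phrases)\n    # Damped term: sum_{i=1}^{c} d^(i-1) g(p) = g(p) * (1 - d^c) / (1 - d).\n    phrase_term = sum(g(p) * (1 - d ** c) / (1 - d) for p, c in counts.items())\n    bigram_term = sum(bg(s) for s in sentences)\n    distortion_term = sum(dist(s) for s in sentences)\n    return phrase_term + bigram_term + eta * distortion_term",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },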
            {
                "text": "In the next section we describe our framework to efficiently utilize this scoring function for crosslanguage summarization.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-based Sentence Scoring",
                "sec_num": "3.1"
            },
            {
                "text": "Utilizing the phrase-based score definition of sentences, we can use greedy algorithms to simultaneously perform sentence selection and sentence compression. Assuming that we have a predefined budget B (e.g. total number of Chinese characters allowed) to restrict the total length of a generated summary. We use C(S) to denote the cost of a summary S, measured by the number of Chinese characters contained in total. The greedy algorithm we will use for our compressive summarization is listed in Algorithm 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "Algorithm 1 A greedy algorithm for phrase-based summarization\n1: S_0 \u2190 \u2205\n2: i \u2190 1\n3: single_best = argmax_{s \u2208 U, C({s}) \u2264 B} F({s})\n4: while U \u2260 \u2205 do\n5:     s_i = argmax_{s \u2208 U} [F(S_{i-1} \u222a {s}) - F(S_{i-1})] / C({s})^r\n6:     if C(S_{i-1} \u222a {s_i}) \u2264 B then\n7:         S_i \u2190 S_{i-1} \u222a {s_i}\n8:         i \u2190 i + 1\n9:     end if\n10:    U \u2190 U \\ {s_i}\n11: end while\n12: return S* = argmax_{S \u2208 {single_best, S_i}} F(S)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "The space U denotes the set of all possible compressed sentences. In each iteration, the algorithm tries to find the compressed sentence with maximum gain-cost ratio (Line 5, where we will follow previous work to set r = 1), and merge it to the summary set at the current iteration (denoted as S i ). The target is to find the compression with maximum gain-cost ratio. This will be discussed in the next section. Note that the algorithm is also naturally applicable to extractive summarization. For extractive summarization, Line 5 corresponds to direct calculations of sentence scores based on our proposed phrase-based function and U will denote all full sentences from the original translated documents.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
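            {
                "text": "A minimal Python sketch of Algorithm 1 is given below for the extractive case, where U contains full sentences; the score function F and cost function C are passed in as callables over lists of sentences, an assumption made purely for this sketch.\n\ndef greedy_summarize(candidates, score, cost, budget, r=1.0):\n    summary, U = [], set(candidates)\n    # Line 3: the best single candidate that fits the budget on its own.\n    feasible = [s for s in U if cost([s]) <= budget]\n    single_best = max(feasible, key=lambda s: score([s]), default=None)\n    while U:\n        # Line 5: pick the candidate with the maximum gain-cost ratio.\n        s = max(U, key=lambda c: (score(summary + [c]) - score(summary))\n                / cost([c]) ** r)\n        if cost(summary + [s]) <= budget:  # Lines 6-8\n            summary.append(s)\n        U.remove(s)  # Line 10\n    # Line 12: return the better of the greedy set and the single best.\n    best_single = [single_best] if single_best is not None else []\n    return max(summary, best_single, key=score)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },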
            {
                "text": "The outline of this algorithm is very similar to the greedy algorithm used by Morita et al. (2013) for subtree extraction, except that in our context the increase of cost function when adding a sentence is exactly the cost of that sentence.",
                "cite_spans": [
                    {
                        "start": 78,
                        "end": 98,
                        "text": "Morita et al. (2013)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "When the distortion term is ignored (\u03b7 = 0), the scoring function is clearly submodular 1 (Lin and Bilmes, 2010) in terms of the set of compressed sentences, since the score now only consists of functional gains of phrases along with bigrams of a compressed sentence. Morita et al. (2013) have proved that when r = 1, this greedy algorithm will achieve a constant approximation factor 1 2 (1 \u2212 e \u22121 ) to the optimal solution. Note that this only gives us the worst case guarantee. What we can achieve in practice is usually far better.",
                "cite_spans": [
                    {
                        "start": 268,
                        "end": 288,
                        "text": "Morita et al. (2013)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "On the other hand, setting \u03b7 < 0 will not affect the performance guarantee too much. Intuitively this is because in most phrase-based translation models a distortion limit constraint |start(p k+1 )\u2212 1 \u2212 end(p k )| \u2264 \u03b4 will be applied on distortion terms, while performing sentence compression can never increase distortion. The main conclusion is formulated as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "Theorem 1. If Algorithm 1 outputs S greedy while the optimal solution is OP T , we have",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "F (S greedy ) \u2265 1 2 (1 \u2212 e \u22121 )F (OP T ) + 1 2 \u03b7\u03b3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "Here \u03b3 > 0 is a constant controlled by distortion difference between sentences, which is relatively small in practice compared with phrase scores. \u03b7 < 0 is the distortion parameter. Note that when \u03b7 is set to be 0, the scoring function is submodular and then we recover the 1 2 (1 \u2212 e \u22121 ) approximation factor as studied by Morita et al. (2013) . We leave the proof of Theorem 1 to supplementary materials due to space limit. The submodularity term in the score plays an important role in the proof.",
                "cite_spans": [
                    {
                        "start": 325,
                        "end": 345,
                        "text": "Morita et al. (2013)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Greedy Algorithm for Compressed Sentence Selection",
                "sec_num": "3.2"
            },
            {
                "text": "In Algorithm 1, the most important part is the greedy selection process (Line 5). The greedy selection criteria here is to maximize the gain-cost ratio. For compressive summarization, we are trying to compress each unselected sentence s tos, aiming at maximizing the gain-cost ratio, where the gain corresponds to",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Finding the Maximum Density Compression",
                "sec_num": "3.3"
            },
            {
                "text": "F (S i\u22121 \u222a {s}) \u2212 F (S i\u22121 ) = p\u2208s count(p,S) i=1 d i\u22121 g(p) + bg(s) + \u03b7dist(s),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Finding the Maximum Density Compression",
                "sec_num": "3.3"
            },
            {
                "text": "and then add the compressed sentences with maximum gain-cost ratio to the summary. We will also address the compression process for each sentence as finding the maximum density compression. The whole framework forms a joint selection and compression process. In our phrase-based scoring for sentences, although there exist no apparent optimal substructure available for exact dynamic programming due to nonlocal distortion penalty, we can have a tractable approximate procedure since the search space is only defined by local decisions on whether a phrase should be kept or dropped.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Finding the Maximum Density Compression",
                "sec_num": "3.3"
            },
            {
                "text": "Our compression process for each sentence s is displayed in Algorithm 2. It gradually expands the set of phrases to be kept in the final compression: starting from the initial set of high-density phrases (Line 4, assuming that phrases with large scores and small costs should always be kept), we can recover the compression with maximum density. The function dist(\u00b7, \u00b7) is the unit distortion penalty defined as dist(a, b) = |start(b) - 1 - end(a)|. We define p.score to be the sum of damped phrase scores for phrase p, i.e. p.score = \\sum_{i=1}^{count(p, S_{i-1})} d^{i-1} g(p), when the current partial summary is S_{i-1}. Therefore during each iteration of the greedy selection process, the compression procedure will also be affected by sentences that have already been included. Define p.cost as the number of words p contains.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Finding the Maximum Density Compression",
                "sec_num": "3.3"
            },
            {
                "text": "Algorithm 2 A growing algorithm for finding the maximum density compressed sentence\n1: function GET_MAX_DENSITY_COMPRESSION(s, S_{i-1})\n2:     queue Q \u2190 \u2205, kept \u2190 \u2205\n3:     for each phrase p in s.phrases do\n4:         if p.score / p.cost > 1 then\n5:             kept \u2190 kept \u222a {p}\n6:             Q.enqueue(p)\n7:         end if\n8:     end for\n9:     while Q \u2260 \u2205 do\n10:        p \u2190 Q.dequeue()\n11:        ppv \u2190 p.previous_phrase, pnx \u2190 p.next_phrase\n12:        if (ppv.score + bg(ppv, p) + \u03b7 dist(ppv, p)) / (ppv.cost + p.cost) > 1 then\n13:            Q.enqueue(ppv), kept \u2190 kept \u222a {ppv}\n14:        end if\n15:        if (pnx.score + bg(pnx, p) + \u03b7 dist(p, pnx)) / (p.cost + pnx.cost) > 1 then\n16:            Q.enqueue(pnx), kept \u2190 kept \u222a {pnx}\n17:        end if\n18:    end while\n19:    return s\u0304 = kept, ratio = [F(S_{i-1} \u222a {s\u0304}) - F(S_{i-1})] / s\u0304.cost\n20: end function",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Finding the Maximum Density Compression",
                "sec_num": "3.3"
            },
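            {
                "text": "The sketch below mirrors Algorithm 2 in Python. The phrase attributes (score, cost, prev, next) and the bg and dist helpers are hypothetical names assumed for this sketch, and a membership check is added so that no phrase is enqueued twice.\n\nfrom collections import deque\n\ndef max_density_compression(phrases, bg, dist, eta=-0.5):\n    kept, Q = set(), deque()\n    for p in phrases:  # Lines 3-8: seed with high-density phrases\n        if p.cost > 0 and p.score / p.cost > 1:\n            kept.add(p)\n            Q.append(p)\n    while Q:  # Lines 9-18: grow outward from the seeds\n        p = Q.popleft()\n        for q, pair in ((p.prev, (p.prev, p)), (p.next, (p, p.next))):\n            if q is None or q in kept:\n                continue\n            joint_gain = q.score + bg(*pair) + eta * dist(*pair)\n            if joint_gain / (p.cost + q.cost) > 1:\n                kept.add(q)\n                Q.append(q)\n    # The compressed sentence keeps the surviving phrases in source order.\n    return [p for p in phrases if p in kept]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Finding the Maximum Density Compression",
                "sec_num": "3.3"
            },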
            {
                "text": "Empirically we find this procedure gives almost the same results with exhaustive search while maintaining efficiency. Assuming that sentence length is no more than L, then the asymptotic complexity of Algorithm 2 will be O(L) since the algorithm requires two passes of all phrases. Therefore the whole framework requires O(kN L) time for a document cluster containing N sentences in total to generate a summary with k sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "20: end function",
                "sec_num": null
            },
            {
                "text": "In the final compressed sentence we just leave the selected phrases continuously as they are, relying on bigram scores to ensure local smoothness. The task is after all a summarization task, where bigram scores play a role of not only controlling grammaticality but keeping main information of the original documents.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "20: end function",
                "sec_num": null
            },
            {
                "text": "Later we will see that this compression process will not hurt grammatical fluency of translated sentences in general. In many cases it may even improve fluency by deleting redundant parentheses or removing incorrectly reordered (unimportant) phrases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "20: end function",
                "sec_num": null
            },
            {
                "text": "Currently there are not so many available datasets for our particular setting of the cross-language summarization task. Hence we only evaluate our method on the same dataset used by Wan (2011) . The dataset is created by manually translating the reference summaries into Chinese from the original DUC 2001 dataset in English. We will refer to this dataset as the DUC 2001 dataset in this paper. There are 30 English document sets in the DUC 2001 dataset for multi-document summarization. Each set contains several documents related to the same topic. Three generic reference English summaries are provided by NIST annotators for each document set. All these English summaries have been translated to Chinese by native Chinese annotators.",
                "cite_spans": [
                    {
                        "start": 182,
                        "end": 192,
                        "text": "Wan (2011)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4.1"
            },
            {
                "text": "All the English sentences in the original documents have been automatically translated into Chinese using Google Translate. We also collect the phrase alignment information from the responses of Google Translate (stored in JSON format) along with the translated texts. We use the Stanford Chinese Word Segmenter 2 for Chinese word segmentation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4.1"
            },
            {
                "text": "The parameters in the algorithms are simply set to be r = 1, d = 0.5, \u03b7 = \u22120.5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4.1"
            },
            {
                "text": "We will report the performance of our compressive solution, denoted as PBCS (for Phrase-Based Compressive Summarization), with comparisons of the following systems:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 PBES: The acronym comes from Phrase-Based Extractive Summarization. It is the extractive counterpart of our solution without calling Algorithm 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Baseline (EN): This baseline relies on merely the English-side information for En-glish sentence ranking in the original documents. The scoring function is designed to be document frequencies of English bigrams, which is similar to the second term in our proposed sentence scoring function in Section 3.1 and is submodular. 3 The extracted English summary is finally automatically translated into the corresponding Chinese summary. This is also known as the summary translation scheme.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Baseline (CN): This baseline relies on merely the Chinese-side information for Chinese sentence ranking. The scoring function is similarly defined by document frequency of Chinese bigrams. The Chinese summary sentences are then directly extracted from the translated Chinese documents. This is also known as the document translation scheme.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 CoRank: We reimplement the graph-based CoRank algorithm, which gives the state-ofthe-art performance on the same DUC 2001 dataset for comparison.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Baseline (ENcomp): This is a compressive baseline where the extracted English sentences in Baseline (EN) will be compressed before being translated to Chinese. The compression process follows from an integer linear program as described by Clarke and Lapata (2008) . This baseline gives strong performance as we have found on English DUC 2001 dataset as well as other monolingual datasets.",
                "cite_spans": [
                    {
                        "start": 241,
                        "end": 265,
                        "text": "Clarke and Lapata (2008)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
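            {
                "text": "For concreteness, the bigram document-frequency scorer used by Baseline (EN) and Baseline (CN) might look like the following sketch; pre-tokenized sentences (lists of tokens) and documents (lists of sentences) are assumed inputs.\n\nfrom collections import Counter\n\ndef bigrams(tokens):\n    return list(zip(tokens, tokens[1:]))\n\ndef bigram_document_frequency(documents):\n    # In how many documents does each bigram occur at least once?\n    df = Counter()\n    for doc in documents:\n        seen = {b for sentence in doc for b in bigrams(sentence)}\n        df.update(seen)\n    return df\n\ndef sentence_score(sentence, df):\n    return sum(df[b] for b in bigrams(sentence))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },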
            {
                "text": "We experiment with two kinds of summary budgets for comparative study. The first one is limiting the summary length to be no more than five sentences. The second one is limiting the total number of Chinese characters of each produced summary to be no more than 300. They will be addressed as Sentence Budgeting and Character Budgeting in the experimental results respectively. Similar to traditional summarization tasks, we use the ROUGE metrics for automatic evaluation of all systems in comparison. The ROUGE metrics measure summary quality by counting overlapping word units (e.g. n-grams) between the candidate summary and the reference summary. Following previous work in the same task, we report the following ROUGE F-measure scores: ROUGE-1 (unigrams), ROUGE-2 (bigrams), ROUGE-W (weighted longest common subsequence; weight=1.2), ROUGE-L (longest common subsequences), and ROUGE-SU4 (skip bigrams with a maximum distance of 4). Here we investigate two kinds of ROUGE metrics for Chinese: ROUGE metrics based on words (after Chinese word segmentation) and ROUGE metrics based on singleton Chinese characters. The latter metrics will not suffer from the problem of word segmentation inconsistency.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "To compare our method with extractive baselines in terms of information loss and grammatical quality, we also ask three native Chinese students as annotators to carry out manual evaluation. The aspects considered during evaluation include Grammaticality (GR), Non-Redundancy (NR), Referential Clarity (RC), Topical Focus (TF) and Structural Coherence (SC). Each aspect is rated with scores from 1 (poor) to 5 (good) 4 . This evaluation is performed on the same random sample of 10 document sets from the DUC 2001 dataset. One group of the gold-standard summaries is left out for evaluation of human-level performance. The other two groups are shown to the annotators, giving them a sense of topics talked about in the document sets. Table 1 and Table 2 display the ROUGE results for our proposed methods and the baseline methods, including both word-based and character-based evaluation. We also conduct pairwise t-test and find that almost all the differences between PBCS and other systems are statistically significant with p 0.01 5 except for the ROUGE-W metric. We have the same observations with previous work on the inferiority of using information from only one-side, while using Chinese-side information only is more beneficial than English-side only. The CoRank algorithm utilizes both sides of information together and achieves significantly better performance over Baseline (EN) and Baseline(CN). Our compressive system outperforms the CoRank algorithm 6 in all metrics.",
                "cite_spans": [
                    {
                        "start": 1386,
                        "end": 1390,
                        "text": "(EN)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 733,
                        "end": 752,
                        "text": "Table 1 and Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.2"
            },
            {
                "text": "Also our system overperforms the compressive pipelining system (Baseline(ENcomp)) as well. Note that the latter only considers information from the source language side. Meanwhile sentence compression may sometimes causes worse translations compared with translating the full original sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.3"
            },
            {
                "text": "For manual evaluation, the average score and standard deviation for each metric is displayed in Table 3 . From the comparison between compressive summarization and the extractive version, there exist slight improvements of nonredundancy. This exactly matches what we can expect from sentence compression that keeps only important part and drop redundancy. We also observe certain amount of improvements on referential clarity. This may be a result of deletions of some phrases containing pronouns, such as he said. Most of such phrases are semantically unimportant and will be dropped during the process of finding the maximum density compression.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 96,
                        "end": 103,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.3"
            },
            {
                "text": "Despite not directly using syntactic information, our compressive summaries do not suffer too much loss of grammaticality. This suggest that bigrams can be treated as good indicators of local grammatical smoothness. We reckon that sentences describing the same events may partially share descriptive bigram patterns, thus sentences selected by the algorithm will consist of mostly important patterns that appear repeatedly in the original document cluster. Only those words that are neither semantically important nor syntactically pivotal will be deleted. Figure 1 lists the summaries for the first document set D04 in the DUC 2001 dataset produced by the proposed compressive system. The Chinese side sentences have been split with spaces according to phrase alignment results. Phrases that have been compressed are grayed out. We also include original English sentences for reference, with deletions according to word alignments from the Chinese sentences. We can observe that our compressive system tries to compress sentences by removing relatively unimportant phrases. The effect of translation errors (e.g. the word watch in on storm watch has been incorrectly translated in the example) can also be reduced since those incorrectly translated words will be dropped for having low information gains. In some cases the gram- Wan (2011) . We believe that this comes from different machine translation results output by Google Translate. Table 3 : Manual evaluation results matical fluency can even be improved from sentence compression, as redundant parentheses may sometimes be removed. We leave the output summaries from all systems for the same document set to supplementary materials.",
                "cite_spans": [
                    {
                        "start": 1330,
                        "end": 1340,
                        "text": "Wan (2011)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 557,
                        "end": 565,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 1441,
                        "end": 1448,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.3"
            },
            {
                "text": "In our experiments, we also study the influence of relevant parameter settings. Figure 2a depicts the variation of ROUGE-2 F-measure when changing the damping factor d from different values in {1, 2 \u22121 , 3 \u22121 , 4 \u22121 , 5 \u22121 }, while \u03b7 = \u22120.5 being fixed. We can see that under proper range the value of d does not effect the result for too much. No damping or too much damping will severely decrease the performance. Figure 2b shows the performance change under different settings of the distortion parameter \u03b7 taking values from {0, \u22120.2, \u22120.5, \u22121, \u22123}, while fixing d = 0.5. The results suggest that, for our purposes of summarization, the difference of considering distortion penalty or not is obvious. At certain level, the effect brought by different values distortion parameter becomes stable.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 80,
                        "end": 89,
                        "text": "Figure 2a",
                        "ref_id": null
                    },
                    {
                        "start": 416,
                        "end": 425,
                        "text": "Figure 2b",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.3"
            },
            {
                "text": "We also empirically study the effect of approximation. The compressive summarization framework proposed in this paper can be trivially cast into an integer linear program (ILP), with the number of variables being too large to make the problem tractable 7 . In this experiment, we use Figure 2c , we depict the objective value achieved by ILP as exact solution, comparing with results from sentences which are gradually selected and compressed by our greedy algorithm. We can see that the approximation is close.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 284,
                        "end": 293,
                        "text": "Figure 2c",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.3"
            },
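            {
                "text": "To make the ILP casting in footnote 7 concrete, here is a minimal sketch written with the PuLP modeling library (rather than the lp solve package used in our experiment); the phrase names, gains, and costs are hypothetical, and the phrase/bigram consistency constraints are omitted for brevity:\n\nimport pulp\n\n# Hypothetical information gains and length costs for candidate phrases.\ngains = {\"p1\": 3.0, \"p2\": 2.0, \"p3\": 1.5}\ncosts = {\"p1\": 4, \"p2\": 3, \"p3\": 2}\nbudget = 6\n\nprob = pulp.LpProblem(\"compressive_summary\", pulp.LpMaximize)\nx = {p: pulp.LpVariable(p, cat=\"Binary\") for p in gains}\nprob += pulp.lpSum(gains[p] * x[p] for p in gains)  # objective: total gain\nprob += pulp.lpSum(costs[p] * x[p] for p in costs) <= budget  # budget constraint\nprob.solve(pulp.PULP_CBC_CMD(msg=False))\nprint([p for p in x if x[p].value() == 1])",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.3"
            },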
            {
                "text": "The task focused in this paper is cross-language document summarization. Several pilot studies have investigated this task. Before Wan (2011)'s work that explicitly utilizes bilingual information in a graph-based framework, earlier methods often use information only from one language (de Chalendar et al., 2005; Pingali et al., 2007; Orasan and Chiorean, 2008; Litvak et al., 2010) .",
                "cite_spans": [
                    {
                        "start": 289,
                        "end": 312,
                        "text": "Chalendar et al., 2005;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 313,
                        "end": 334,
                        "text": "Pingali et al., 2007;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 335,
                        "end": 361,
                        "text": "Orasan and Chiorean, 2008;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 362,
                        "end": 382,
                        "text": "Litvak et al., 2010)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "This work is closely related to greedy algorithms for budgeted submodular maximization. Many studies have formalized text summarization tasks as submodular maximization problems (Lin and Bilmes, 2010; Lin and Bilmes, 2011; Morita et al., 2013) . A more recent work (Dasgupta et al., 2013) discussed the problem of maximizing a function with a submodular part and a nonsubmodular dispersion term, which may appear to be closer to our scoring functions.",
                "cite_spans": [
                    {
                        "start": 178,
                        "end": 200,
                        "text": "(Lin and Bilmes, 2010;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 201,
                        "end": 222,
                        "text": "Lin and Bilmes, 2011;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 223,
                        "end": 243,
                        "text": "Morita et al., 2013)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 265,
                        "end": 288,
                        "text": "(Dasgupta et al., 2013)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
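            {
                "text": "As a minimal sketch of the cost-benefit greedy scheme underlying this line of work (following the spirit of Lin and Bilmes (2010), not their exact scaled-ratio rule), the following assumes a monotone objective F and per-element costs; the coverage function used in the toy usage is hypothetical:\n\ndef greedy_budgeted(universe, F, cost, budget):\n    # Repeatedly take the element with the best marginal-gain-to-cost ratio\n    # that still fits within the remaining budget.\n    selected, spent = set(), 0\n    remaining = set(universe)\n    while remaining:\n        best = max(remaining, key=lambda u: (F(selected | {u}) - F(selected)) / cost(u))\n        remaining.discard(best)\n        if spent + cost(best) <= budget and F(selected | {best}) > F(selected):\n            selected.add(best)\n            spent += cost(best)\n    return selected\n\n# Toy usage: F counts distinct covered items, a submodular coverage function.\ncover = {\"a\": {1, 2}, \"b\": {2, 3}, \"c\": {3, 4, 5}}\nF = lambda S: len(set().union(*(cover[s] for s in S))) if S else 0\nprint(greedy_budgeted(cover, F, lambda u: 1, budget=2))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },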
            {
                "text": "In recent years, some research has made progress beyond extractive summarization, espethe original maximization problem with pruned brute-force enumeration and therefore exactly optimal but too costly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "8 http://lpsolve.sourceforge.net/ cially in the context of compressive summarization. Zajic et al. (2006) tries a pipeline strategy with heuristics to generate multiple candidate compressions and extract from this compressed sentences. Berg-Kirkpatrick et al. (2011) create linear models of weights learned by structural SVMs for different components and tried to jointly formulate sentence selection and syntax tree trimming in integer linear programs. Woodsend and Lapata (2012) propose quasi tree substitution grammars for multiple rewriting operations. All these methods involve integer linear programming solvers to generate compressed summaries, which is time-consuming for multidocument summarization tasks. Almeida and Martins (2013) form the compressive summarization problem in a more efficient dual decomposition framework. Models for sentence compression and extractive summarization are trained by multitask learning techniques. Wang et al. (2013) explore different types of compression on constituent parse trees for query-focused summarization. Li et al. (2013) propose a guided sentence compression model with ILP-based summary sentence selection. Their following work (Li et al., 2014) incorporate various constraints on constituent parse trees to improve the linguistic quality of the compressed sentences. In these studies, the bestperforming systems require supervised learning for different subtasks. More recent work tries to formulate document summarization tasks as optimization problems and use their solutions to guide sentence compression (Li et al., 2015; Yao et al., 2015) . employ integer linear programming for conducting phrase selection and merging simultaneously to form compressed sentences after phrase extraction.",
                "cite_spans": [
                    {
                        "start": 86,
                        "end": 105,
                        "text": "Zajic et al. (2006)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 715,
                        "end": 741,
                        "text": "Almeida and Martins (2013)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 942,
                        "end": 960,
                        "text": "Wang et al. (2013)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 1060,
                        "end": 1076,
                        "text": "Li et al. (2013)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1185,
                        "end": 1202,
                        "text": "(Li et al., 2014)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1566,
                        "end": 1583,
                        "text": "(Li et al., 2015;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 1584,
                        "end": 1601,
                        "text": "Yao et al., 2015)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "In this paper we propose a phrase-based framework for the task of cross-language document summarization. The proposed scoring scheme can be naturally operated on compressive summarization. We use efficient greedy procedure to approximately optimize the scoring function. Experimental results show improvements of our compressive solution over state-of-the-art systems. Even though we do not explicitly use any syntactic information, the generated summaries of our system do not lose much grammaticality and fluency. The scoring function in our framework is in- spired by earlier phrase-based machine translation models. Our next step is to try more fine-grained scoring schemes using similar techniques from modern approaches of statistical machine translation. To further improve grammaticality of generated summaries, we may try to sacrifice the time efficiency for a little bit and use syntactic information provided by syntactic parsers. Our framework currently uses only the single best translation. It will be more powerful to integrate machine translation and summarization, utilizing multiple possible translations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "6"
            },
            {
                "text": "Currently many successful statistical machine translation systems are phrase-based with alignment information provided and we utilize this fact in this work. It is interesting to explore how will the performance be affected if we are only provided with parallel sentences and then alignments can only be derived using an independent aligner.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "6"
            },
            {
                "text": "A set function F : 2 U \u2192 R defined over subsets of a universe set U is said to be submodular iff it satisfies the diminishing returns property:\u2200S \u2286 T \u2286 U \\ u, we have F (S \u222a {u}) \u2212 F (S) \u2265 F (T \u222a {u}) \u2212 F (T ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
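            {
                "text": "A quick, hypothetical spot-check of this diminishing-returns property, using the square root of a modular sum (a concave transform of a modular function, hence submodular); the weights are invented:\n\nimport math\n\nw = {\"a\": 4.0, \"b\": 1.0, \"c\": 9.0}  # hypothetical element weights\nF = lambda S: math.sqrt(sum(w[s] for s in S))  # concave of modular => submodular\nS, T, u = {\"a\"}, {\"a\", \"b\"}, \"c\"\n# The marginal gain of u shrinks as the context grows from S to T.\nassert F(S | {u}) - F(S) >= F(T | {u}) - F(T)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },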
            {
                "text": "http://nlp.stanford.edu/software/ segmenter.shtml",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In our experiments this method gives similar performance compared with graph-based pipelining baselines implemented in previous work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Fractional numbers are allowed for cases where the annotators feel uncertain about.5 The significance level holds after Bonferroni adjustment, for the purpose of multiple testing.6 There exists ignorable difference between the results of our reimplemented version of CoRank and those reported by",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "By casting decisions on whether to select a certain phrase or bigram as binary variables, with additional linear constraints on phrase/bigram selection consistency, we get an ILP with essentially the same objective function and a linear budget constraint. This is conceptually equivalent to solving",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We thank all the anonymous reviewers for helpful comments and suggestions. This work was supported by National Hi-Tech Research and Development Program (863 Program) of China (2015AA015403, 2014AA015102) and National Natural Science Foundation of China (61170166, 61331011). The contact author of this paper, according to the meaning given to this role by Peking University, is Xiaojun Wan.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Fast and robust compressive summarization with dual decomposition and multi-task learning",
                "authors": [
                    {
                        "first": "Miguel",
                        "middle": [],
                        "last": "Almeida",
                        "suffix": ""
                    },
                    {
                        "first": "Andre",
                        "middle": [],
                        "last": "Martins",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "196--206",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Miguel Almeida and Andre Martins. 2013. Fast and robust compressive summarization with dual de- composition and multi-task learning. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 196-206, Sofia, Bulgaria, August. As- sociation for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Jointly learning to extract and compress",
                "authors": [
                    {
                        "first": "Taylor",
                        "middle": [],
                        "last": "Berg-Kirkpatrick",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Gillick",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "481--490",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 481-490, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Abstractive multidocument summarization via phrase selection and merging",
                "authors": [
                    {
                        "first": "Lidong",
                        "middle": [],
                        "last": "Bing",
                        "suffix": ""
                    },
                    {
                        "first": "Piji",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Yi",
                        "middle": [],
                        "last": "Liao",
                        "suffix": ""
                    },
                    {
                        "first": "Wai",
                        "middle": [],
                        "last": "Lam",
                        "suffix": ""
                    },
                    {
                        "first": "Weiwei",
                        "middle": [],
                        "last": "Guo",
                        "suffix": ""
                    },
                    {
                        "first": "Rebecca",
                        "middle": [],
                        "last": "Passonneau",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
                "volume": "1",
                "issue": "",
                "pages": "1587--1597",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. Abstractive multi- document summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1587-1597, Beijing, China, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Global inference for sentence compression: An integer linear programming approach",
                "authors": [
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Clarke",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Journal of Artificial Intelligence Research",
                "volume": "31",
                "issue": "",
                "pages": "273--381",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "James Clarke and Mirella Lapata. 2008. Global in- ference for sentence compression: An integer linear programming approach. Journal of Artificial Intelli- gence Research, 31:273-381.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Summarization through submodularity and dispersion",
                "authors": [
                    {
                        "first": "Anirban",
                        "middle": [],
                        "last": "Dasgupta",
                        "suffix": ""
                    },
                    {
                        "first": "Ravi",
                        "middle": [],
                        "last": "Kumar",
                        "suffix": ""
                    },
                    {
                        "first": "Sujith",
                        "middle": [],
                        "last": "Ravi",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1014--1022",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anirban Dasgupta, Ravi Kumar, and Sujith Ravi. 2013. Summarization through submodularity and disper- sion. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1014-1022, Sofia, Bul- garia, August. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Crosslingual summarization with thematic extraction, syntactic sentence simplification, and bilingual generation",
                "authors": [
                    {
                        "first": "Ga\u00ebl",
                        "middle": [],
                        "last": "de Chalendar",
                        "suffix": ""
                    },
                    {
                        "first": "Romaric",
                        "middle": [],
                        "last": "Besan\u00e7on",
                        "suffix": ""
                    },
                    {
                        "first": "Olivier",
                        "middle": [],
                        "last": "Ferret",
                        "suffix": ""
                    },
                    {
                        "first": "Gregory",
                        "middle": [],
                        "last": "Grefenstette",
                        "suffix": ""
                    },
                    {
                        "first": "Olivier",
                        "middle": [],
                        "last": "Mesnard",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Workshop on Crossing Barriers in Text Summarization Research, 5th International Conference on Recent Advances in Natural Language Processing (RANLP2005)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ga\u00ebl de Chalendar, Romaric Besan\u00e7on, Olivier Ferret, Gregory Grefenstette, and Olivier Mesnard. 2005. Crosslingual summarization with thematic extrac- tion, syntactic sentence simplification, and bilin- gual generation. In Workshop on Crossing Barri- ers in Text Summarization Research, 5th Interna- tional Conference on Recent Advances in Natural Language Processing (RANLP2005).",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Statistical phrase-based translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Marcu",
                        "middle": [],
                        "last": "Daniel",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Human Language Technologies: The 2003 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "48--54",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Franz Josef Och, and Marcu Daniel. 2003. Statistical phrase-based translation. In Hu- man Language Technologies: The 2003 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 48-54, Edmonton, May-June. Association for Com- putational Linguistics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn. 2009. Statistical Machine Translation. Cambridge University Press.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Document summarization via guided sentence compression",
                "authors": [
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Fei",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Fuliang",
                        "middle": [],
                        "last": "Weng",
                        "suffix": ""
                    },
                    {
                        "first": "Yang",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "490--500",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. 2013. Document summarization via guided sentence com- pression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 490-500, Seattle, Washington, USA, Oc- tober. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Improving multi-documents summarization by sentence compression based on expanded constituent parse trees",
                "authors": [
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Yang",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Fei",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Lin",
                        "middle": [],
                        "last": "Zhao",
                        "suffix": ""
                    },
                    {
                        "first": "Fuliang",
                        "middle": [],
                        "last": "Weng",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
                "volume": "",
                "issue": "",
                "pages": "691--701",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen Li, Yang Liu, Fei Liu, Lin Zhao, and Fuliang Weng. 2014. Improving multi-documents summa- rization by sentence compression based on expanded constituent parse trees. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 691-701, Doha, Qatar, October. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Reader-aware multi-document summarization via sparse coding",
                "authors": [
                    {
                        "first": "Piji",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Lidong",
                        "middle": [],
                        "last": "Bing",
                        "suffix": ""
                    },
                    {
                        "first": "Wai",
                        "middle": [],
                        "last": "Lam",
                        "suffix": ""
                    },
                    {
                        "first": "Hang",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Yi",
                        "middle": [],
                        "last": "Liao",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "IJCAI",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Piji Li, Lidong Bing, Wai Lam, Hang Li, and Yi Liao. 2015. Reader-aware multi-document summariza- tion via sparse coding. In IJCAI.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Multi-document summarization via budgeted maximization of submodular functions",
                "authors": [
                    {
                        "first": "Hui",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Jeff",
                        "middle": [],
                        "last": "Bilmes",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "912--920",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hui Lin and Jeff Bilmes. 2010. Multi-document sum- marization via budgeted maximization of submod- ular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 912-920, Los Angeles, California, June. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "A class of submodular functions for document summarization",
                "authors": [
                    {
                        "first": "Hui",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Jeff",
                        "middle": [],
                        "last": "Bilmes",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "510--520",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hui Lin and Jeff Bilmes. 2011. A class of submodu- lar functions for document summarization. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 510-520, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "A new approach to improving multilingual summarization using a genetic algorithm",
                "authors": [
                    {
                        "first": "Marina",
                        "middle": [],
                        "last": "Litvak",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Last",
                        "suffix": ""
                    },
                    {
                        "first": "Menahem",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "927--936",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010. A new approach to improving multilingual summarization using a genetic algorithm. In Pro- ceedings of the 48th Annual Meeting of the Associa- tion for Computational Linguistics, pages 927-936, Uppsala, Sweden, July. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Subtree extractive summarization via submodular maximization",
                "authors": [
                    {
                        "first": "Hajime",
                        "middle": [],
                        "last": "Morita",
                        "suffix": ""
                    },
                    {
                        "first": "Ryohei",
                        "middle": [],
                        "last": "Sasano",
                        "suffix": ""
                    },
                    {
                        "first": "Hiroya",
                        "middle": [],
                        "last": "Takamura",
                        "suffix": ""
                    },
                    {
                        "first": "Manabu",
                        "middle": [],
                        "last": "Okumura",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "1023--1032",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hajime Morita, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2013. Subtree extractive sum- marization via submodular maximization. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1023-1032, Sofia, Bulgaria, August. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Evaluation of a cross-lingual romanian-english multi-document summariser",
                "authors": [
                    {
                        "first": "Constantin",
                        "middle": [],
                        "last": "Orasan",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Oana Andreea Chiorean",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "LREC",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Constantin Orasan and Oana Andreea Chiorean. 2008. Evaluation of a cross-lingual romanian-english multi-document summariser. In LREC.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Experiments in cross language query focused multi-document summarization",
                "authors": [
                    {
                        "first": "Prasad",
                        "middle": [],
                        "last": "Pingali",
                        "suffix": ""
                    },
                    {
                        "first": "Jagadeesh",
                        "middle": [],
                        "last": "Jagarlamudi",
                        "suffix": ""
                    },
                    {
                        "first": "Vasudeva",
                        "middle": [],
                        "last": "Varma",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Workshop on Cross Lingual Information Access Addressing the Information Need of Multilingual Societies in IJCAI2007. Citeseer",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Prasad Pingali, Jagadeesh Jagarlamudi, and Vasudeva Varma. 2007. Experiments in cross language query focused multi-document summarization. In Work- shop on Cross Lingual Information Access Address- ing the Information Need of Multilingual Societies in IJCAI2007. Citeseer.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Using bilingual information for cross-language document summarization",
                "authors": [
                    {
                        "first": "Xiaojun",
                        "middle": [],
                        "last": "Wan",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "1546--1555",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 1546-1555, Portland, Oregon, USA, June. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A sentence compression based framework to query-focused multidocument summarization",
                "authors": [
                    {
                        "first": "Lu",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Hema",
                        "middle": [],
                        "last": "Raghavan",
                        "suffix": ""
                    },
                    {
                        "first": "Vittorio",
                        "middle": [],
                        "last": "Castelli",
                        "suffix": ""
                    },
                    {
                        "first": "Radu",
                        "middle": [],
                        "last": "Florian",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Cardie",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "1384--1394",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Flo- rian, and Claire Cardie. 2013. A sentence com- pression based framework to query-focused multi- document summarization. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1384-1394, Sofia, Bulgaria, August. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Multiple aspect summarization using integer linear programming",
                "authors": [
                    {
                        "first": "Kristian",
                        "middle": [],
                        "last": "Woodsend",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
                "volume": "",
                "issue": "",
                "pages": "233--243",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kristian Woodsend and Mirella Lapata. 2012. Mul- tiple aspect summarization using integer linear pro- gramming. In Proceedings of the 2012 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 233-243. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Compressive document summarization via sparse optimization",
                "authors": [
                    {
                        "first": "Jin-Ge",
                        "middle": [],
                        "last": "Yao",
                        "suffix": ""
                    },
                    {
                        "first": "Xiaojun",
                        "middle": [],
                        "last": "Wan",
                        "suffix": ""
                    },
                    {
                        "first": "Jianguo",
                        "middle": [],
                        "last": "Xiao",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "IJCAI",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Compressive document summarization via sparse optimization. In IJCAI.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Sentence compression as a component of a multi-document summarization system",
                "authors": [
                    {
                        "first": "David",
                        "middle": [
                            "M"
                        ],
                        "last": "Zajic",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Dorr",
                        "suffix": ""
                    },
                    {
                        "first": "Jimmy",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Schwartz",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 2006 Document Understanding Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David M Zajic, Bonnie Dorr, Jimmy Lin, and Richard Schwartz. 2006. Sentence compression as a compo- nent of a multi-document summarization system. In Proceedings of the 2006 Document Understanding Workshop, New York.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "type_str": "figure",
                "num": null,
                "uris": null,
                "text": "Example compressive summary lp solve package 8 as the ILP solver to obtain an exact solution on the first document cluster (D04) in DUC 2001 dataset. In"
            },
            "FIGREF1": {
                "type_str": "figure",
                "num": null,
                "uris": null,
                "text": "Figure 2: Experimental analysis"
            },
            "TABREF1": {
                "type_str": "table",
                "num": null,
                "content": "<table><tr><td>Sentence Budgeting</td><td colspan=\"5\">ROUGE-1 ROUGE-2 ROUGE-W ROUGE-L ROUGE-SU4</td></tr><tr><td>Baseline(EN)</td><td>0.34842</td><td>0.11823</td><td>0.05505</td><td>0.15665</td><td>0.12320</td></tr><tr><td>Baseline(CN)</td><td>0.34901</td><td>0.12015</td><td>0.05664</td><td>0.15942</td><td>0.12625</td></tr><tr><td>PBES</td><td>0.36618</td><td>0.12281</td><td>0.05913</td><td>0.16018</td><td>0.11317</td></tr><tr><td>CoRank (reimplemented)</td><td>0.37601</td><td>0.12570</td><td>0.06088</td><td>0.17350</td><td>0.13352</td></tr><tr><td>Baseline(ENcomp)</td><td>0.36982</td><td>0.13001</td><td>0.06906</td><td>0.16233</td><td>0.13543</td></tr><tr><td>PBCS</td><td>0.37890</td><td>0.13549</td><td>0.07102</td><td>0.17632</td><td>0.14098</td></tr><tr><td>Character Budgeting</td><td colspan=\"5\">ROUGE-1 ROUGE-2 ROUGE-W ROUGE-L ROUGE-SU4</td></tr><tr><td>Baseline(EN)</td><td>0.33602</td><td>0.10546</td><td>0.05263</td><td>0.15437</td><td>0.12161</td></tr><tr><td>Baseline(CN)</td><td>0.34075</td><td>0.12012</td><td>0.05678</td><td>0.15736</td><td>0.11981</td></tr><tr><td>PBES</td><td>0.35483</td><td>0.11902</td><td>0.05642</td><td>0.15899</td><td>0.11205</td></tr><tr><td>CoRank (reimplemented)</td><td>0.36147</td><td>0.12305</td><td>0.05847</td><td>0.16962</td><td>0.13364</td></tr><tr><td>Baseline(ENcomp)</td><td>0.36654</td><td>0.12960</td><td>0.06503</td><td>0.15987</td><td>0.13421</td></tr><tr><td>PBCS</td><td>0.37842</td><td>0.13441</td><td>0.07005</td><td>0.16928</td><td>0.13985</td></tr></table>",
                "text": "Results of word-based ROUGE evaluation",
                "html": null
            },
            "TABREF2": {
                "type_str": "table",
                "num": null,
                "content": "<table><tr><td>System</td><td>GR</td><td>NR</td><td>RC</td><td>TF</td><td>SC</td></tr><tr><td colspan=\"6\">CoRank 3.00\u00b10.75 3.35\u00b10.57 3.55\u00b10.82 3.90\u00b10.79 3.55\u00b10.74</td></tr><tr><td>PBES</td><td colspan=\"5\">2.90\u00b10.89 3.25\u00b10.70 3.50\u00b10.87 3.96\u00b10.80 3.45\u00b10.50</td></tr><tr><td>PBCS</td><td colspan=\"5\">2.90\u00b10.83 3.60\u00b10.49 3.75\u00b10.82 3.93\u00b10.68 3.40\u00b10.58</td></tr><tr><td>Human</td><td colspan=\"5\">4.60\u00b10.49 4.15\u00b10.73 4.35\u00b10.73 4.93\u00b10.25 3.90\u00b10.94</td></tr></table>",
                "text": "Results of character-based ROUGE evaluation",
                "html": null
            },
            "TABREF3": {
                "type_str": "table",
                "num": null,
                "content": "<table/>",
                "text": "\u51ef\u7279 \u5973\u58eb \u786c\u6717 \uff0c \u7d27\u6025\u670d\u52a1 \u5728\u4f5b\u7f57\u91cc\u8fbe\u5dde \u7684 \u6234\u5fb7 \u53bf\uff0c \u627f\u62c5\u4e86 \u98ce\u66b4 \u7684\u51b2\u51fb \u4e3b \u4efb \u4f30\u8ba1\uff0c \u5b89\u5fb7\u9c81 \u5df2\u7ecf \u9020\u6210 150\u4ebf \u7f8e\u5143 \u5230 200\u4ebf \u7f8e\u5143 \u7684\u635f\u5bb3 ( 75\u4ebf \u82f1\u9551 \uff0c 100\u4ebf \u82f1\u9551 ) \u3002 Ms Kate Hale, director of emergency services in Florida's Dade County, which bore the brunt of the storm, estimated that Andrew had already caused Dollars 15bn to Dollars 20bn (Pounds 7.5bn-Pounds 10bn) of damage.\u96e8\u679c\u98d3\u98ce \uff0c \u88ad\u51fb \u4e1c\u6d77\u5cb8 \u5728 1989\u5e749\u6708 \uff0c \u82b1\u8d39\u4e86 \u4fdd\u9669\u4e1a \u7ea6 42\u4ebf \u7f8e\u5143 \u3002 Hurricane Hugo, which hit the east coast in September 1989, cost the insurance industry about Dollars 4.2bn.\u7f8e\u56fd\u57ce\u5e02 \u6cbf \u58a8\u897f\u54e5\u6e7e\u7684 \u963f\u62c9\u5df4\u9a6c\u5dde \u5230\u5f97\u514b\u8428\u65af\u5dde \u4e1c\u90e8 \u662f \u5728 \u98ce\u66b4 \u624b\u8868 \u6628\u665a \u5b89 \u5fb7\u9c81 \u98d3\u98ce \u5411\u897f \u6a2a\u8de8 \u4f5b\u7f57\u91cc\u8fbe\u5dde\u5357\u90e8 \u5e2d\u5377 \u540e \uff0c\u9020\u6210 \u81f3\u5c11 \u516b\u4eba\u6b7b\u4ea1 \u548c\u4e25\u91cd\u7684 \u8d22 \u4ea7\u635f\u5931 \u3002 US CITIES along the Gulf of Mexico from Alabama to eastern Texas were on storm watch last night as Hurricane Andrew headed west after sweeping across southern Florida, causing at least eight deaths and severe property damage.\u8fc7\u53bb\u7684 \u4e25\u91cd \u98d3\u98ce \u7f8e\u56fd \uff0c\u96e8\u679c \uff0c \u88ad\u51fb \u5357\u5361\u7f57\u6765\u7eb3\u5dde \u4e8e1989\u5e74 \uff0c \u8017\u8d44 \u4ece \u4fdd\u9669 \u635f\u5931 \u884c\u4e1a 42\u4ebf \u7f8e\u5143 \uff0c\u4f46 \u9020\u6210\u7684 \u603b\u4f24\u5bb3 \u7684 \u4f30\u8ba1 60\u4ebf \u7f8e\u5143 \u548c 100\u4ebf \u7f8e\u5143 \u4e4b\u95f4 \u4e0d\u7b49 \u3002 The last serious US hurricane, Hugo, which struck South Carolina in 1989, cost the industry Dollars 4.2bn from insured losses, though estimates of the total damage caused ranged between Dollars 6bn and Dollars 10bn.\u6700\u521d\u7684 \u62a5\u9053\u79f0\uff0c \u81f3\u5c11\u6709\u4e00\u4eba \u5df2\u7ecf \u6b7b\u4ea1 \uff0c 75 \u4eba\u53d7\u4f24 \uff0c\u6570\u5343 \u53d6\u5f97 \u6cbf\u7740 \u8def\u6613\u65af\u5b89 \u90a3\u5dde\u6d77\u5cb8 \u65e0\u5bb6\u53ef\u5f52 \uff0c 14 \u8bc1\u5b9e \u5728\u4f5b\u7f57\u91cc\u8fbe\u5dde\u548c \u6b7b\u4ea1 \u4e09 \u5df4\u54c8\u9a6c\u7fa4\u5c9b \u540e \u3002 Initial reports said at least one person had died, 75 been injured and thousands made homeless along the Louisiana coast, after 14 confirmed deaths in Florida and three in the Bahamas.",
                "html": null
            }
        }
    }
}