{
    "paper_id": "D08-1019",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T16:31:04.838975Z"
    },
    "title": "Sentence Fusion via Dependency Graph Compression",
    "authors": [
        {
            "first": "Katja",
            "middle": [],
            "last": "Filippova",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "EML Research gGmbH Schloss",
                "location": {
                    "addrLine": "Wolfsbrunnenweg 33",
                    "postCode": "69118",
                    "settlement": "Heidelberg",
                    "country": "Germany"
                }
            },
            "email": ""
        },
        {
            "first": "Michael",
            "middle": [],
            "last": "Strube",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "EML Research gGmbH Schloss",
                "location": {
                    "addrLine": "Wolfsbrunnenweg 33",
                    "postCode": "69118",
                    "settlement": "Heidelberg",
                    "country": "Germany"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present a novel unsupervised sentence fusion method which we apply to a corpus of biographies in German. Given a group of related sentences, we align their dependency trees and build a dependency graph. Using integer linear programming we compress this graph to a new tree, which we then linearize. We use GermaNet and Wikipedia for checking semantic compatibility of co-arguments. In an evaluation with human judges our method outperforms the fusion approach of Barzilay & McKeown (2005) with respect to readability.",
    "pdf_parse": {
        "paper_id": "D08-1019",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present a novel unsupervised sentence fusion method which we apply to a corpus of biographies in German. Given a group of related sentences, we align their dependency trees and build a dependency graph. Using integer linear programming we compress this graph to a new tree, which we then linearize. We use GermaNet and Wikipedia for checking semantic compatibility of co-arguments. In an evaluation with human judges our method outperforms the fusion approach of Barzilay & McKeown (2005) with respect to readability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Automatic text summarization is a rapidly developing field in computational linguistics. Summarization systems can be classified as either extractive or abstractive ones (Sp\u00e4rck Jones, 1999) . To date, most systems are extractive: sentences are selected from one or several documents and then ordered. This method exhibits problems, because input sentences very often overlap and complement each other at the same time. As a result there is a trade-off between non-redundancy and completeness of the output. Although the need for abstractive approaches has been recognized before (e.g. McKeown et al. (1999) ), so far almost all attempts to get closer to abstractive summarization using scalable, statistical techniques have been limited to sentence compression.",
                "cite_spans": [
                    {
                        "start": 170,
                        "end": 190,
                        "text": "(Sp\u00e4rck Jones, 1999)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 586,
                        "end": 607,
                        "text": "McKeown et al. (1999)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The main reason why there is little progress on abstractive summarization is that this task seems to require a conceptual representation of the text which is not yet available (see e.g. Hovy (2003, p.589) ). Sentence fusion (Barzilay & McKeown, 2005) , where a new sentence is generated from a group of related sentences and where complete semantic and conceptual representation is not required, can be seen as a middle-ground between extractive and abstractive summarization. Our work concerns a corpus of biographies in German where multiple documents about the same person should be merged into a single one. An example of a fused sentence (3) with the source sentences (1,2) is given below: Having both (1) and (2) in a summary would make it redundant. Selecting only one of them would not give all the information from the input. (3), fused from both (1) and (2), conveys the necessary information without being redundant and is more appropriate for a summary.",
                "cite_spans": [
                    {
                        "start": 186,
                        "end": 204,
                        "text": "Hovy (2003, p.589)",
                        "ref_id": null
                    },
                    {
                        "start": 224,
                        "end": 250,
                        "text": "(Barzilay & McKeown, 2005)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To this end, we present a novel sentence fusion method based on dependency structure alignment and semantically and syntactically informed phrase aggregation and pruning. We address the problem in an unsupervised manner and use integer linear programming (ILP) to find a globally optimal solution. We argue that our method has three important advantages compared to existing methods. First, we address the grammaticality issue empirically by means of knowledge obtained from an automatically parsed corpus. We do not require such resources as subcategorization lexicons or hand-crafted rules, but decide to retain a dependency based on its syntactic importance score. The second point concerns integrating semantics. Being definitely important, \"this source of information remains relatively unused in work on aggregation 1 within NLG\" (Reiter & Dale, 2000, p.141) . To our knowledge, in the text-to-text generation field, we are the first to use semantic information not only for alignment but also for aggregation in that we check coarguments' compatibility. Apart from that, our method is not limited to sentence fusion and can be easily applied to sentence compression. In Filippova & Strube (2008) we compress English sentences with the same approach and achieve state-of-the-art performance.",
                "cite_spans": [
                    {
                        "start": 836,
                        "end": 864,
                        "text": "(Reiter & Dale, 2000, p.141)",
                        "ref_id": null
                    },
                    {
                        "start": 1177,
                        "end": 1202,
                        "text": "Filippova & Strube (2008)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The paper is organized as follows: Section 2 gives an overview of related work and Section 3 presents our data. Section 4 introduces our method and Section 5 describes the experiments and discusses the results of the evaluation. The conclusions follow in the final section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Most studies on text-to-text generation concern sentence compression where the input consists of exactly one sentence (Jing, 2001; Hori & Furui, 2004; Clarke & Lapata, 2008, inter alia) . In such setting, redundancy, incompleteness and compatibility issues do not arise. Apart from that, there is no obvious way of how existing sentence compression methods can be adapted to sentence fusion. Barzilay & McKeown (2005) present a sentence fusion method for multi-document news summarization which crucially relies on the assumption that information appearing in many sources is important. Consequently, their method produces an intersection of input sentences by, first, finding the centroid of the input, second, augmenting it with information from other sentences and, finally, pruning a predefined set of constituents (e.g. PPs). The resulting structure is not necessarily a tree and allows for extraction of several trees, each of which can be linearized in many ways. Marsi & Krahmer (2005) extend the approach of Barzilay & McKeown to do not only intersection but also union fusion. Like Barzilay & McKeown (2005) , they find the best linearization with a language model which, as they point out, often produces inadequate rankings being unable to deal with word order, agreement and subcategorization constraints. In our work we aim at producing a valid dependency tree structure so that most grammaticality issues are resolved before the linearization stage. Wan et al. (2007) introduce a global revision method of how a novel sentence can be generated from a set of input words. They formulate the problem as a search for a maximum spanning tree which is incrementally constructed by connecting words or phrases with dependency relations. The grammaticality issue is addressed by a number of hard constraints. As Wan et al. point out, one of the problems with their method is that the output built up from dependencies found in a corpus might have a meaning different from the intended one. Since we build our trees from the input dependencies, this problem does not arise with our method. Apart from that, in our opinion, the optimization formulation we adopt is more appropriate as it allows us to integrate many constraints without complex rescoring rules.",
                "cite_spans": [
                    {
                        "start": 118,
                        "end": 130,
                        "text": "(Jing, 2001;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 131,
                        "end": 150,
                        "text": "Hori & Furui, 2004;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 151,
                        "end": 185,
                        "text": "Clarke & Lapata, 2008, inter alia)",
                        "ref_id": null
                    },
                    {
                        "start": 392,
                        "end": 417,
                        "text": "Barzilay & McKeown (2005)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 971,
                        "end": 993,
                        "text": "Marsi & Krahmer (2005)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1092,
                        "end": 1117,
                        "text": "Barzilay & McKeown (2005)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 1465,
                        "end": 1482,
                        "text": "Wan et al. (2007)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The comparable corpus we work with is a collection of about 400 biographies in German gathered from the Internet 2 . These biographies describe 140 different people, and the number of articles for one person ranges from 2 to 4, being 3 on average. Despite obvious similarities between articles about one person, neither identical content nor identical ordering of information can be expected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "3"
            },
            {
                "text": "Fully automatic preprocessing in our system comprises the following steps: sentence boundaries are identified with a Perl CPAN module 3 . Then the sentences are split into tokens and the TnT tagger (Brants, 2000) and the TreeTagger (Schmid, 1997) are used for tagging and lemmatization respectively. Finally, the biographies are parsed with the CDG dependency parser (Foth & Menzel, 2006) . We also identify references to the biographee (pronominal as well as proper names) and temporal expressions (absolute and relative) with a few rules.",
                "cite_spans": [
                    {
                        "start": 198,
                        "end": 212,
                        "text": "(Brants, 2000)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 232,
                        "end": 246,
                        "text": "(Schmid, 1997)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 367,
                        "end": 388,
                        "text": "(Foth & Menzel, 2006)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "3"
            },
            {
                "text": "Groups of related sentences serve as input to a sentence fusion system and thus need to be identified first (4.1). Then the dependency trees of the sentences are modified (4.2) and aligned (4.3). Syntactic importance (4.4) and word informativeness (4.5) scores are used to extract a new dependency tree from a graph of aligned trees (4.6). Finally, the tree is linearized (4.7).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Our Method",
                "sec_num": "4"
            },
            {
                "text": "Sentence alignment for comparable corpora requires methods different from those used in machine translation for parallel corpora. For example, given two biographies of a person, one of them may follow the timeline from birth to death whereas the other may group events thematically or tell only about the scientific contribution of the person. Thus one cannot assume that the sentence order or the content is the same in two biographies. Shallow methods like word or bigram overlap, (weighted) cosine or Jaccard similarity are appealing as they are cheap and robust. In particular, Nelken & Schieber (2006) demonstrate the efficacy of a sentence-based tf*idf score when applied to comparable corpora. Following them, we define the similarity of two sentences sim(s_1, s_2) as",
                "cite_spans": [
                    {
                        "start": 582,
                        "end": 606,
                        "text": "Nelken & Schieber (2006)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Alignment",
                "sec_num": "4.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\\frac{S_1 \\cdot S_2}{|S_1| \\cdot |S_2|} = \\frac{\\sum_t w_{S_1}(t) \\cdot w_{S_2}(t)}{\\sqrt{\\sum_t w_{S_1}^2(t)} \\sqrt{\\sum_t w_{S_2}^2(t)}}",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Sentence Alignment",
                "sec_num": "4.1"
            },
            {
                "text": "where S is the set of all lemmas but stop-words from s, and w_S(t) is the weight of the term t:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Alignment",
                "sec_num": "4.1"
            },
            {
                "text": "w_S(t) = S(t) \\cdot \\frac{1}{N_t} \\qquad (2)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Alignment",
                "sec_num": "4.1"
            },
            {
                "text": "where S(t) is the indicator function of S, N_t is the number of sentences in the biographies of one person which contain t. We enhance the similarity measure by looking up synonymy in GermaNet (Lemnitzer & Kunze, 2002) . We discard identical or nearly identical sentences (sim(s_1, s_2) > 0.8) and greedily build sentence clusters using a hierarchical group-wise average technique. As a result, one sentence may belong to one cluster at most. These sentence clusters serve as input to the fusion algorithm.",
                "cite_spans": [
                    {
                        "start": 193,
                        "end": 218,
                        "text": "(Lemnitzer & Kunze, 2002)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Alignment",
                "sec_num": "4.1"
            },
            {
                "text": "We apply a set of transformations to a dependency tree to emphasize its important properties and eliminate unimportant ones. These transformations are necessary for the compression stage. An example of a dependency tree and its modified version are given in Fig. 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 258,
                        "end": 264,
                        "text": "Fig. 1",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Dependency Tree Modification",
                "sec_num": "4.2"
            },
            {
                "text": "PREP preposition nodes (an, in) are removed and placed as labels on the edges to the respective nouns;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dependency Tree Modification",
                "sec_num": "4.2"
            },
            {
                "text": "CONJ a chain of conjuncts (Mathematik und Physik) is split and each node is attached to the parent node (studierte) provided they are not verbs;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dependency Tree Modification",
                "sec_num": "4.2"
            },
            {
                "text": "APP a chain of words analyzed as appositions by CDG (Niels Bohr) is collapsed into one node;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dependency Tree Modification",
                "sec_num": "4.2"
            },
            {
                "text": "FUNC function words like determiners (der), auxiliary verbs or negative particles are removed from the tree and memorized with their lexical heads (memorizing negative particles preserves negation in the output); ROOT every dependency tree gets an explicit root which is connected to every verb node;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dependency Tree Modification",
                "sec_num": "4.2"
            },
            {
                "text": "BIO all occurrences of the biographee (Niels Bohr) are replaced with the bio tag.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dependency Tree Modification",
                "sec_num": "4.2"
            },
            {
                "text": "Once we have a group of two to four strongly related sentences and their transformed dependency trees, we aim at finding the best node alignment. We use a simple, fast and transparent method and align any two words provided that they 1. are content words;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Node Alignment",
                "sec_num": "4.3"
            },
            {
                "text": "2. have the same part-of-speech;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Node Alignment",
                "sec_num": "4.3"
            },
            {
                "text": "3. have identical lemmas or are synonyms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Node Alignment",
                "sec_num": "4.3"
            },
            {
                "text": "In case of multiple possibilities, which are extremely rare in our data, the choice is made randomly. By merging all aligned nodes we get a dependency graph which consists of all dependencies from the input trees. In case it contains a cycle, one of the alignments from the cycle is eliminated. We prefer this very simple method to bottom-up ones (Barzilay & McKeown, 2005; Marsi & Krahmer, 2005) for two main reasons. Pursuing local subtree alignments, bottom-up methods may leave identical words unaligned and thus prohibit fusion of complementary information. On the other hand, they may force alignment of two unrelated words if the subtrees they root are largely aligned. Although in some cases it helps discover paraphrases, it considerably increases chances of generating ungrammatical output which we want to avoid at any cost.",
                "cite_spans": [
                    {
                        "start": 347,
                        "end": 373,
                        "text": "(Barzilay & McKeown, 2005;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 374,
                        "end": 396,
                        "text": "Marsi & Krahmer, 2005)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Node Alignment",
                "sec_num": "4.3"
            },
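The three alignment criteria above can be condensed into a single predicate over (lemma, POS) pairs. A minimal sketch: the POS tag set and the toy synonym lookup (a stand-in for a wordnet such as GermaNet) are assumptions.

```python
# Sketch of the alignment test of Sec. 4.3: two nodes align iff both are
# content words, share the POS tag, and have identical lemmas or are synonyms.

CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}
SYNONYMS = {("beginnen", "anfangen")}     # assumed toy synonym set

def synonymous(a, b):
    return (a, b) in SYNONYMS or (b, a) in SYNONYMS

def alignable(n1, n2):
    """n1, n2: (lemma, pos) pairs from two transformed dependency trees."""
    (lemma1, pos1), (lemma2, pos2) = n1, n2
    if pos1 not in CONTENT_POS or pos2 not in CONTENT_POS:
        return False                                   # 1. content words only
    if pos1 != pos2:
        return False                                   # 2. same part of speech
    return lemma1 == lemma2 or synonymous(lemma1, lemma2)  # 3. lemma/synonym
```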
            {
                "text": "Given a dependency graph we want to get a new dependency tree from it. Intuitively, we want to retain obligatory dependencies (e.g. subject) while removing less important ones (e.g. adv). When deciding on pruning an argument, previous approaches either used a set of hand-crafted rules (e.g. Barzilay & McKeown (2005) ), or utilized a subcategorization lexicon (e.g. Jing (2001) ). The hand-crafted rules are often too general to ensure a grammatical argument structure for different verbs (e.g. PPs can be pruned). Subcategorization lexicons are not readily available for many languages and cover only verbs. E.g. they do not tell that the noun son is very often modified by a PP using the preposition of, as in the son of Niels Bohr, and that the NP without a PP modifier may appear incomplete.",
                "cite_spans": [
                    {
                        "start": 292,
                        "end": 317,
                        "text": "Barzilay & McKeown (2005)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 367,
                        "end": 378,
                        "text": "Jing (2001)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactic Importance Score",
                "sec_num": "4.4"
            },
            {
                "text": "To overcome these problems, we decide on pruning an edge by estimating the conditional probability of its label given its head, P (l|h) 4 . For example, P (subj|studieren) -the probability of the label subject given the verb study -is higher than P (in|studieren), and therefore the subject will be preserved whereas the prepositional label and thus the whole PP can be pruned, if needed. Table 1 presents the probabilities of several labels given that the head is studieren and shows that some prepositions are more important than other ones. Note that if we did not apply the PREP modification we would be unable to distinguish between different prepositions and could only calculate P (pp|studieren) which would not be very informative. subj obja in an nach mit zu 0.88 0.74 0.44 0.42 0.09 0.02 0.01 Table 1 : Probabilities of subj, obja(ccusative), in, at, after, with, to given the verb studieren (study)",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 389,
                        "end": 396,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 803,
                        "end": 810,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Syntactic Importance Score",
                "sec_num": "4.4"
            },
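The score P(l|h) can be estimated by relative frequency over an automatically parsed corpus: for each occurrence of a head, record which labels it governs, then divide by the number of occurrences of that head (the per-label values need not sum to one, as in Table 1). The counts below are invented for illustration, not the corpus counts behind Table 1.

```python
# Relative-frequency estimate of P(l|h) as used for the syntactic importance
# score (Sec. 4.4).
from collections import Counter

def label_probabilities(occurrences):
    """occurrences: list of (head_lemma, set_of_labels), one entry per
    occurrence of the head in the parsed corpus."""
    head_counts = Counter(h for h, _ in occurrences)
    pair_counts = Counter((h, l) for h, labels in occurrences for l in labels)
    return {(h, l): c / head_counts[h] for (h, l), c in pair_counts.items()}

# Invented toy counts: 10 occurrences of "studieren"
occs = ([("studieren", {"subj", "in"})] * 4 +
        [("studieren", {"subj"})] * 5 +
        [("studieren", {"obja"})] * 1)
P = label_probabilities(occs)
# P[("studieren", "subj")] == 0.9, P[("studieren", "in")] == 0.4
```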
            {
                "text": "We also want to retain informative words in the output tree. There are many ways in which word importance can be defined. Here, we use a formula introduced by Clarke & Lapata (2008) which is a modification of the significance score of Hori & Furui (2004) :",
                "cite_spans": [
                    {
                        "start": 159,
                        "end": 181,
                        "text": "Clarke & Lapata (2008)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 235,
                        "end": 254,
                        "text": "Hori & Furui (2004)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Informativeness Score",
                "sec_num": "4.5"
            },
            {
                "text": "I(w i ) = l N \u2022 f i log F A F i (3)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Informativeness Score",
                "sec_num": "4.5"
            },
            {
                "text": "w i is the topic word (either noun or verb), f i is the frequency of w i in the aligned biographies, F i is the frequency of w i in the corpus, and F A is the sum of frequencies of all topic words in the corpus. l is the number of clause nodes above w and N is the maximum level of embedding of the sentence which w belongs to. By defining word importance differently, e.g. as relatedness of a word to the topic, we could apply our method to topic-based summarization (Krahmer et al., 2008) .",
                "cite_spans": [
                    {
                        "start": 468,
                        "end": 490,
                        "text": "(Krahmer et al., 2008)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Informativeness Score",
                "sec_num": "4.5"
            },
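Eq. (3) translates directly into code. The parameter names follow the definitions above; the example values in the test are invented.

```python
# Word informativeness score of Sec. 4.5, Eq. (3).
import math

def informativeness(l, N, f_i, F_i, F_A):
    """I(w_i) = (l / N) * f_i * log(F_A / F_i).

    l: clause nodes above w_i; N: max embedding level of its sentence;
    f_i: frequency of w_i in the aligned biographies;
    F_i: corpus frequency of w_i; F_A: summed corpus frequency of all
    topic words."""
    return (l / N) * f_i * math.log(F_A / F_i)
```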
            {
                "text": "We formulate the task of getting a tree from a dependency graph as an optimization problem and solve it with ILP 5 . In order to decide which edges of the graph to remove, for each directed dependency edge from head h to word w we introduce a binary variable x l h,w , where l stands for the label of the edge:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "x l h,w = 1 if the dependency is preserved 0 otherwise",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "The goal is to find a subtree of the graph which gets the highest score of the objective function (5) to which both the probability of dependencies (P (l|h) ) and the importance of dependent words (I(w)) contribute: 5 We use lp solve in our implementation http:// sourceforge.net/projects/lpsolve.",
                "cite_spans": [
                    {
                        "start": 216,
                        "end": 217,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "f (X) = x x l h,w \u2022 P (l|h) \u2022 I(w)",
                        "eq_num": "(5)"
                    }
                ],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "The objective function is subject to four types of constraints presented below (W stands for the set of graph nodes minus root, i.e. the set of words).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "STRUCTURAL constraints allow to get a tree from the graph: (6) ensures that each word has one head at most. 7ensures connectivity in the tree. (8) is optional and restricts the size of the resulting tree to \u03b1 words (\u03b1 = min(0.6 \u2022 |W |, 10)).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u2200w \u2208 W, h,l x l h,w \u2264 1 (6) \u2200w \u2208 W, h,l x l h,w \u2212 1 |W | u,l x l w,u \u2265 0 (7) x x l h,w \u2264 \u03b1",
                        "eq_num": "(8)"
                    }
                ],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "SYNTACTIC constraints ensure the syntactic validity of the output tree and explicitly state which arguments should be preserved. We have only one syntactic constraint which guarantees that a subordinating conjunction (sc) is preserved (9) if and only if the clause it belongs to serves as a subordinate clause (sub) in the output.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u2200x sc w,u , h,l x sub h,w \u2212 x sc w,u = 0",
                        "eq_num": "(9)"
                    }
                ],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "SEMANTIC constraints restrict coordination to semantically compatible elements. The idea behind these constraints is the following (see Fig. 2 ). It can be that one sentence says He studied math and another one He studied physics, so the output may unite the two words under coordination: He studied math and physics. But if the input sentences are He studied physics and He studied sciences, then one should not unite both, because sciences is the generalization of physics. Neither should one unite two unrelated words: He studied with pleasure and He studied with Bohr cannot be fused into He studied with pleasure and Bohr.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 136,
                        "end": 142,
                        "text": "Fig. 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "To formalize these intuitions we define two functions hm (w,u) and rel (w,u) : hm(w,u) is a binary function, whereas rel(w,u) returns a value from [0, 1]. We also introduce additional variables y l w,u (represented by dashed lines in Fig. 2) :",
                "cite_spans": [
                    {
                        "start": 57,
                        "end": 62,
                        "text": "(w,u)",
                        "ref_id": null
                    },
                    {
                        "start": 71,
                        "end": 76,
                        "text": "(w,u)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 234,
                        "end": 241,
                        "text": "Fig. 2)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "y l w,u = 1 if \u2203h, l : x l h,w = 1 \u2227 x l h,u = 1 0 otherwise (10)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "For two edges sharing a head and having identical labels to be retained we check in GermaNet and in the taxonomy derived from Wikipedia (Kassner et al., 2008) that their dependents are not in the hyponymy or meronymy relation (11). We prohibit verb coordination unless it is found in one of the input sentences. If the dependents are nouns, we also check that their semantic relatedness as measured with WikiRelate! (Strube & Ponzetto, 2006) is above a certain threshold (12). We empirically determined the value of \u03b2 = 0.36 by calculating an average similarity of coordinated nouns in the corpus. (14) ) guarantee that y l w,u = x l h,w \u00d7 x l h,u i.e. they ensure that the semantic constraints are applied only if both the labels from h to w and from h to u are preserved.",
                "cite_spans": [
                    {
                        "start": 136,
                        "end": 158,
                        "text": "(Kassner et al., 2008)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 416,
                        "end": 441,
                        "text": "(Strube & Ponzetto, 2006)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 598,
                        "end": 602,
                        "text": "(14)",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u2200y l w,u , hm(w, u) \u2022 y l w,u = 0 (11) \u2200y l w,u , (rel(w, u) \u2212 \u03b2) \u2022 y l w,u \u2265 0",
                        "eq_num": "(12)"
                    }
                ],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u2200y l w,u , x l h,w + x l h,u \u2265 2y l w,u (13) \u2200y l w,u , 1 \u2212 x l h,w + 1 \u2212 x l h,u \u2265 1 \u2212 y l w,u",
                        "eq_num": "(14)"
                    }
                ],
                "section": "New Sentence Generation",
                "sec_num": "4.6"
            },
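For intuition, the optimization of Sec. 4.6 can be reproduced by brute force on a toy graph: enumerate 0/1 assignments to the edge variables, keep those satisfying the structural constraints (6)-(8), and maximize objective (5). The authors solve this with lp_solve; here the graph and the edge weights (standing in for P(l|h) \u00b7 I(w)) are invented, and the syntactic and semantic constraints are omitted.

```python
# Brute-force stand-in for the ILP of Sec. 4.6 on a toy dependency graph.
from itertools import combinations

# Directed edges (head, word, label) and weights approximating P(l|h) * I(w).
edges = [("root", "studierte", "root"), ("studierte", "bio", "subj"),
         ("studierte", "Physik", "obja"), ("studierte", "gern", "adv")]
weight = {("root", "studierte", "root"): 1.0,
          ("studierte", "bio", "subj"): 0.9,
          ("studierte", "Physik", "obja"): 0.7,
          ("studierte", "gern", "adv"): 0.1}
alpha = 3  # size limit, constraint (8)

def feasible(selected):
    words = {w for _, w, _ in edges}
    for w in words:
        heads = [e for e in selected if e[1] == w]
        if len(heads) > 1:                    # (6): at most one head per word
            return False
        if not heads and any(e[0] == w for e in selected):
            return False                      # (7): a word with children is attached
    return len(selected) <= alpha             # (8): at most alpha edges

best = max((s for r in range(len(edges) + 1)
            for s in combinations(edges, r) if feasible(s)),
           key=lambda s: sum(weight[e] for e in s))
# The size limit forces the low-weight adv edge to be pruned.
```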
            {
                "text": "The \"overgenerate-and-rank\" approach to statistical surface realization is very common (Langkilde & Knight, 1998) . Unfortunately, in its simplest and most popular version, it ignores syntactical constraints and may produce ungrammatical output. For example, an inviolable rule of German grammar states that the finite verb must be in the second position in the main clause. Since it is hard to enforce such rules with an ngram language model, syntax-informed linearization methods have been developed for German (Ringger et al., 2004; Filippova & Strube, 2007) . We apply our recent method to order constituents and, using the CMU toolkit (Clarkson & Rosenfeld, 1997) , build a trigram language model from Wikipedia (approx. 1GB plain text) to find the best word order within constituents. Some constraints on word order are inferred from the input. Only interclause punctuation is generated.",
                "cite_spans": [
                    {
                        "start": 87,
                        "end": 113,
                        "text": "(Langkilde & Knight, 1998)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 513,
                        "end": 535,
                        "text": "(Ringger et al., 2004;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 536,
                        "end": 561,
                        "text": "Filippova & Strube, 2007)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 640,
                        "end": 668,
                        "text": "(Clarkson & Rosenfeld, 1997)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Linearization",
                "sec_num": "4.7"
            },
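The language-model step of this linearization can be illustrated as picking, among candidate word orders within a constituent, the string a trigram model scores highest. The probabilities below are invented stand-ins for the Wikipedia-trained model built with the CMU toolkit, and unseen trigrams get a small floor value instead of proper backoff smoothing.

```python
# Trigram-based selection of the best word order (Sec. 4.7), sketched.
from itertools import permutations

TRIGRAMS = {("<s>", "er", "studierte"): 0.1,      # invented probabilities
            ("er", "studierte", "Physik"): 0.05,
            ("<s>", "Physik", "studierte"): 0.001,
            ("Physik", "studierte", "er"): 0.001}

def score(words, floor=1e-6):
    """Product of trigram probabilities over the padded word sequence."""
    padded = ["<s>", *words]
    p = 1.0
    for i in range(len(padded) - 2):
        p *= TRIGRAMS.get(tuple(padded[i:i + 3]), floor)
    return p

best = max(permutations(["er", "studierte", "Physik"]), key=score)
```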
            {
                "text": "We choose Barzilay & McKeown's system as a nontrivial baseline since, to our knowledge, there is no other system which outperforms theirs (Sec. 5.1). It is important for us to evaluate the fusion part of our system, so the input and the linearization module of our method and the baseline are identical. We are also interested in how many errors are due to the linearization module and thus define the readability upper bound (Sec. 5.2). We further present and discuss the experiments (Sec. 5.3 and 5.5).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments and Evaluation",
                "sec_num": "5"
            },
            {
                "text": "The algorithm of Barzilay & McKeown (2005) proceeds as follows: Given a group of related sentences, a dependency tree is built for each sentence. These trees are modified so that grammatical features are eliminated from the representation and memorized; noun phrases are flattened to facilitate alignment. A locally optimal pairwise alignment of modified dependency trees is recursively found with Word-Net and a paraphrase lexicon. From the alignment costs the centroid of the group is identified. Then this tree is augmented with information from other trees given that it appears in at least half of the sentences from this group. A rule-based pruning module prunes optional constituents, such as PPs or relative clauses. The linearization of the resulting tree (or graph) is done with a trigram language model. To adapt this system to German, we use the Ger-maNet API (Gurevych & Niederlich, 2005) instead of WordNet. We do not use a paraphrase lexicon, because there is no comparable corpus of sufficient size available for German. We readjust the alignment parameters of the system to prevent dissimilar nodes from being aligned. The input to the algorithm is generated as described in Sec. 4.1. The linearization is done as described in Sec. 4.7. In cases when there is a graph to linearize, all possible trees covering the maximum number of nodes are extracted from it and linearized. The most probable string is selected as the final output with a language model. For the rest of the reimplementation we follow the algorithm as presented.",
                "cite_spans": [
                    {
                        "start": 17,
                        "end": 42,
                        "text": "Barzilay & McKeown (2005)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 872,
                        "end": 901,
                        "text": "(Gurevych & Niederlich, 2005)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baseline",
                "sec_num": "5.1"
            },
            {
                "text": "To find the upper bound on readability, we select one sentence from the input randomly, parse it and linearize the dependency tree as described in Sec. 4.7. This way we obtain a sentence which may differ in form from the input sentences but whose content is identical to one of them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Readability Upper Bound",
                "sec_num": "5.2"
            },
            {
                "text": "It is notoriously difficult to evaluate generation and summarization systems as there are many dimensions in which the quality of the output can be assessed. The goal of our present evaluation is in the first place to check whether our method is able to produce sensible output.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "5.3"
            },
            {
                "text": "We evaluated the three systems (GRAPH-COMPRESSION, BARZILAY & MCKEOWN and READABILITY UB) with 50 native German speakers on 120 fused sentences generated from 40 randomly drawn related sentences groups (3 \u00d7 40). In an online experiment, the participants were asked to read a fused sentence preceded by the input and to rate its readability (read) and informativity in respect to the input (inf ) on a five point scale. The experiment was designed so that every participant rated 40 sentences in total. No participant saw two sentences generated from the same input. The results are presented in Table 2 . len is an average length in words of the output.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 595,
                        "end": 602,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "5.3"
            },
            {
                "text": "READABILITY UB 4.0 3.5 12.9 BARZILAY & MCKEOWN 3.1 3.0 15.5 GRAPH-COMPRESSION 3.7 3.1 13.0 ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "read inf len",
                "sec_num": null
            },
            {
                "text": "The main disadvantage of our method, as well as other methods designed to work on syntactic structures, is that it requires a very accurate parser. In some cases, errors in the preprocessing made extracting a valid dependency tree impossible. The poor rating of READABILITY UB also shows that errors of the parser and of the linearization module affect the output considerably.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Error Analysis",
                "sec_num": "5.4"
            },
            {
                "text": "Although the semantic constraints ruled out many anomalous combinations, the limited coverage of GermaNet and the taxonomy derived from Wikipedia was the reason for some semantic oddities in the sentences generated by our method. For example, it generated phrases like aus England und Gro\u00dfbritannien (from England and Great Britain). A larger taxonomy would presumably increase the recall of the semantic constraints which proved helpful. Such errors were not observed in the output of the baseline because it does not fuse within NPs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Error Analysis",
                "sec_num": "5.4"
            },
            {
                "text": "Both the baseline and our method made subcategorization errors, although these are more common for the baseline which aligns not only synonyms but also verbs which share some arguments. Also, the baseline pruned some PPs necessary for a sentence to be complete. For example, it pruned an der Atombombe (on the atom bomb) and generated an incomplete sentence Er arbeitete (He worked). For the baseline, alignment of flattened NPs instead of words caused generating very wordy and redundant sentences when the input parse trees were incorrect. In other cases, our method made mistakes in linearizing constituents because it had to rely on a language model whereas the baseline used unmodified constituents from the input. Absense of intraclause commas caused a drop in readability in some otherwise grammatical sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Error Analysis",
                "sec_num": "5.4"
            },
            {
                "text": "A paired t-test revealed significant differences between the readability ratings of the three systems (p = 0.01) but found no significant differences between the informativity scores of our system and the baseline. Some participants reported informativity hard to estimate and to be assessable for grammatical sentences only. The higher readability rating of our method supports our claim that the method based on syntactic importance score and global constraints generates more grammatical sentences than existing systems. An important advantage of our method is that it addresses the subcategorization issue directly without shifting the burden of selecting the right arguments to the linearization module. The dependency structure it outputs is a tree and not a graph as it may happen with the method of Barzilay & McKeown (2005) . Moreover, our method can distinguish between more and less obligatory arguments. For example, it knows that at is more important than to for study whereas for go it is the other way round. Unlike our differentiated approach, the baseline rule states that PPs can generally be pruned.",
                "cite_spans": [
                    {
                        "start": 807,
                        "end": 832,
                        "text": "Barzilay & McKeown (2005)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5.5"
            },
            {
                "text": "Since the baseline generates a new sentence by modifying the tree of an input sentence, in some cases it outputs a compression of this sentence. Unlike this, our method is not based on an input tree and generates a new sentence without being biased to any of the input sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5.5"
            },
            {
                "text": "Our method can also be applied to non-trivial sentence compression, whereas the baseline and similar methods, such as Marsi & Krahmer (2005) , would then boil down to a few very general pruning rules. We tested our method on the English compression corpus 6 and evaluated the compressions automatically the same way as Clarke & Lapata (2008) did. The results (Filippova & Strube, 2008) were as good as or significantly better than the state-of-the-art, depending on the choice of dependency parser.",
                "cite_spans": [
                    {
                        "start": 118,
                        "end": 140,
                        "text": "Marsi & Krahmer (2005)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 319,
                        "end": 341,
                        "text": "Clarke & Lapata (2008)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 359,
                        "end": 385,
                        "text": "(Filippova & Strube, 2008)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5.5"
            },
            {
                "text": "We presented a novel sentence fusion method which formulates the fusion task as an optimization problem. It is unsupervised and finds a globally optimal solution taking semantics, syntax and word informativeness into account. The method does not require hand-crafted rules or lexicons to generate grammatical output but relies on the syntactic importance score calculated from an automatically parsed corpus. An experiment with native speakers demonstrated that our method generates more grammatical sentences than existing systems.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "There are several directions to explore in the future. Recently query-based sentence fusion has been shown to be a better defined task than generic sentence fusion (Krahmer et al., 2008) . By modifying the word informativeness score, e.g. by giving higher scores to words semantically related to the query, one could force our system to retain words relevant to the query in the output. To generate coherent texts we plan to move beyond sentence generation and add discourse constraints to our system.",
                "cite_spans": [
                    {
                        "start": 164,
                        "end": 186,
                        "text": "(Krahmer et al., 2008)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "We followBarzilay & McKeown (2005) and refer to aggregation within text-to-text generation as sentence fusion.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://de.wikipedia.org, http://home. datacomm.ch/biografien, http://biographie. net/de, http://www.weltchronik.de/ws/bio/ main.htm, http://www.brockhaus-suche.de/ suche 3 http://search.cpan.org/ \u223c holsten/ Lingua-DE-Sentence-0.07/Sentence.pm",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The probabilities are calculated from a corpus of approx. 3,000 biographies from Wikipedia which we annotated automatically as described in Section 3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The corpus is available from http://homepages. inf.ed.ac.uk/s0460084/data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "Acknowledgements: This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a KTF grant (09.009.2004). Part of the data has been used with a permission of Bibliographisches Institut & F. A. Brockhaus AG, Mannheim, Germany. We would like to thank the participants in our online evaluation. We are also grateful to Regina Barzilay and the three reviewers for their helpful comments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Sentence fusion for multidocument news summarization",
                "authors": [
                    {
                        "first": "Regina",
                        "middle": [],
                        "last": "Barzilay",
                        "suffix": ""
                    },
                    {
                        "first": "Kathleen",
                        "middle": [
                            "R"
                        ],
                        "last": "McKeown",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Computational Linguistics",
                "volume": "31",
                "issue": "3",
                "pages": "297--327",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Barzilay, Regina & Kathleen R. McKeown (2005). Sen- tence fusion for multidocument news summarization. Computational Linguistics, 31(3):297-327.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "TnT -A statistical Part-of-Speech tagger",
                "authors": [
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 6th Conference on Applied Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "224--231",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brants, Thorsten (2000). TnT -A statistical Part-of- Speech tagger. In Proceedings of the 6th Confer- ence on Applied Natural Language Processing, Seat- tle, Wash., 29 April -4 May 2000, pp. 224-231.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Global inference for sentence compression: An integer linear programming approach",
                "authors": [
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Clarke",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Journal of Artificial Intelligence Research",
                "volume": "31",
                "issue": "",
                "pages": "399--429",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Clarke, James & Mirella Lapata (2008). Global inference for sentence compression: An integer linear program- ming approach. Journal of Artificial Intelligence Re- search, 31:399-429.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Statistical language modeling using the CMU-Cambridge toolkit",
                "authors": [
                    {
                        "first": "Philip",
                        "middle": [],
                        "last": "Clarkson",
                        "suffix": ""
                    },
                    {
                        "first": "Ronald",
                        "middle": [],
                        "last": "Rosenfeld",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the 5th European Conference on Speech Communication and Technology",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Clarkson, Philip & Ronald Rosenfeld (1997). Statis- tical language modeling using the CMU-Cambridge toolkit. In Proceedings of the 5th European Con- ference on Speech Communication and Technology,",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Generating constituent order in German clauses",
                "authors": [
                    {
                        "first": "Katja",
                        "middle": [],
                        "last": "Filippova",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Strube",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "320--327",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Filippova, Katja & Michael Strube (2007). Generating constituent order in German clauses. In Proceedings of the 45th Annual Meeting of the Association for Com- putational Linguistics, Prague, Czech Republic, 23-30 June 2007, pp. 320-327.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Dependency tree based sentence compression",
                "authors": [
                    {
                        "first": "Katja",
                        "middle": [],
                        "last": "Filippova",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Strube",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 5th International Conference on Natural Language Generation",
                "volume": "",
                "issue": "",
                "pages": "25--32",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Filippova, Katja & Michael Strube (2008). Dependency tree based sentence compression. In Proceedings of the 5th International Conference on Natural Language Generation, Salt Fork, Ohio, 12-14 June 2008, pp. 25- 32.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Hybrid parsing: Using probabilistic models as predictors for a symbolic parser",
                "authors": [
                    {
                        "first": "Kilian",
                        "middle": [],
                        "last": "Foth",
                        "suffix": ""
                    },
                    {
                        "first": "Wolfgang",
                        "middle": [],
                        "last": "Menzel",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "321--327",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Foth, Kilian & Wolfgang Menzel (2006). Hybrid pars- ing: Using probabilistic models as predictors for a symbolic parser. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computa- tional Linguistics, Sydney, Australia, 17-21 July 2006, pp. 321-327.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Accessing GermaNet data and computing semantic relatedness",
                "authors": [
                    {
                        "first": "Iryna",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    },
                    {
                        "first": "Hendrik",
                        "middle": [],
                        "last": "Niederlich",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Companion Volume to the Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "5--8",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gurevych, Iryna & Hendrik Niederlich (2005). Access- ing GermaNet data and computing semantic related- ness. In Companion Volume to the Proceedings of the 43rd Annual Meeting of the Association for Compu- tational Linguistics, Ann Arbor, Mich., 25-30 June 2005, pp. 5-8.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Speech summarization: An approach through word extraction and a method for evaluation",
                "authors": [
                    {
                        "first": "Chiori",
                        "middle": [],
                        "last": "Hori",
                        "suffix": ""
                    },
                    {
                        "first": "Sadaoki",
                        "middle": [],
                        "last": "Furui",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "IEEE Transactions on Information and Systems",
                "volume": "",
                "issue": "1",
                "pages": "15--25",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hori, Chiori & Sadaoki Furui (2004). Speech summa- rization: An approach through word extraction and a method for evaluation. IEEE Transactions on Infor- mation and Systems, E87-D(1):15-25.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "The Oxford Handbook of Computational Linguistics",
                "authors": [
                    {
                        "first": "Eduard",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "583--598",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hovy, Eduard (2003). Text summarization. In Ruslan Mitkov (Ed.), The Oxford Handbook of Computational Linguistics, pp. 583-598. Oxford, U.K.: Oxford Uni- versity Press.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Cut-and-Paste Text Summarization",
                "authors": [
                    {
                        "first": "Hongyan",
                        "middle": [],
                        "last": "Jing",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jing, Hongyan (2001). Cut-and-Paste Text Summariza- tion, (Ph.D. thesis). Computer Science Department, Columbia University, New York, N.Y.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Acquiring a taxonomy from the German Wikipedia",
                "authors": [
                    {
                        "first": "Laura",
                        "middle": [],
                        "last": "Kassner",
                        "suffix": ""
                    },
                    {
                        "first": "Vivi",
                        "middle": [],
                        "last": "Nastase",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Strube",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kassner, Laura, Vivi Nastase & Michael Strube (2008). Acquiring a taxonomy from the German Wikipedia. In Proceedings of the 6th International Conference on Language Resources and Evaluation, Marrakech, Mo- rocco, 26 May -1 June 2008.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Query-based sentence fusion is better defined and leads to more preferred results than generic sentence fusion",
                "authors": [
                    {
                        "first": "Emiel",
                        "middle": [],
                        "last": "Krahmer",
                        "suffix": ""
                    },
                    {
                        "first": "Erwin",
                        "middle": [],
                        "last": "Marsi",
                        "suffix": ""
                    },
                    {
                        "first": "Paul",
                        "middle": [],
                        "last": "van Pelt",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Companion Volume to the Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "193--196",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Krahmer, Emiel, Erwin Marsi & Paul van Pelt (2008). Query-based sentence fusion is better defined and leads to more preferred results than generic sentence fusion. In Companion Volume to the Proceedings of the 46th Annual Meeting of the Association for Com- putational Linguistics, Columbus, Ohio, 15-20 June 2008, pp. 193-196.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Generation that exploits corpus-based statistical knowledge",
                "authors": [
                    {
                        "first": "Irene",
                        "middle": [],
                        "last": "Langkilde",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "704--710",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Langkilde, Irene & Kevin Knight (1998). Generation that exploits corpus-based statistical knowledge. In Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meet- ing of the Association for Computational Linguistics, Montr\u00e9al, Qu\u00e9bec, Canada, 10-14 August 1998, pp. 704-710.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "GermaNet -representation, visualization, application",
                "authors": [
                    {
                        "first": "Lothar",
                        "middle": [],
                        "last": "Lemnitzer",
                        "suffix": ""
                    },
                    {
                        "first": "Claudia",
                        "middle": [],
                        "last": "Kunze",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 3rd International Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "1485--1491",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lemnitzer, Lothar & Claudia Kunze (2002). GermaNet -representation, visualization, application. In Pro- ceedings of the 3rd International Conference on Lan- guage Resources and Evaluation, Las Palmas, Canary Islands, Spain, 29-31 May 2002, pp. 1485-1491.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Explorations in sentence fusion",
                "authors": [
                    {
                        "first": "Erwin",
                        "middle": [],
                        "last": "Marsi",
                        "suffix": ""
                    },
                    {
                        "first": "Emiel",
                        "middle": [],
                        "last": "Krahmer",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the European Workshop on Natural Language Generation",
                "volume": "",
                "issue": "",
                "pages": "109--117",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marsi, Erwin & Emiel Krahmer (2005). Explorations in sentence fusion. In Proceedings of the European Work- shop on Natural Language Generation, Aberdeen, Scotland, 8-10 August, 2005, pp. 109-117.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Towards multidocument summarization by reformulation: Progress and prospects",
                "authors": [
                    {
                        "first": "Kathleen",
                        "middle": [
                            "R"
                        ],
                        "last": "Mckeown",
                        "suffix": ""
                    },
                    {
                        "first": "Judith",
                        "middle": [
                            "L"
                        ],
                        "last": "Klavans",
                        "suffix": ""
                    },
                    {
                        "first": "Vassileios",
                        "middle": [],
                        "last": "Hatzivassiloglou",
                        "suffix": ""
                    },
                    {
                        "first": "Regina",
                        "middle": [],
                        "last": "Barzilay",
                        "suffix": ""
                    },
                    {
                        "first": "Eleazar",
                        "middle": [],
                        "last": "Eskin",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the 16th National Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "453--460",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "McKeown, Kathleen R., Judith L. Klavans, Vassileios Hatzivassiloglou, Regina Barzilay & Eleazar Eskin (1999). Towards multidocument summarization by re- formulation: Progress and prospects. In Proceedings of the 16th National Conference on Artificial Intelli- gence, Orlando, Flo., 18-22 July 1999, pp. 453-460.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Towards robust context-sensitive sentence alignment for monolingual corpora",
                "authors": [
                    {
                        "first": "Rani",
                        "middle": [],
                        "last": "Nelken",
                        "suffix": ""
                    },
                    {
                        "first": "Stuart",
                        "middle": [],
                        "last": "Schieber",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "161--168",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nelken, Rani & Stuart Schieber (2006). Towards robust context-sensitive sentence alignment for monolingual corpora. In Proceedings of the 11th Conference of the European Chapter of the Association for Compu- tational Linguistics, Trento, Italy, 3-7 April 2006, pp. 161-168.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Building Natural Language Generation Systems",
                "authors": [
                    {
                        "first": "Ehud",
                        "middle": [],
                        "last": "Reiter",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [],
                        "last": "Dale",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Reiter, Ehud & Robert Dale (2000). Building Natu- ral Language Generation Systems. Cambridge, U.K.: Cambridge University Press.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Linguistically informed statistical models of constituent structure for ordering in sentence realization",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Ringger",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Gamon",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [
                            "C"
                        ],
                        "last": "Moore",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Rojas",
                        "suffix": ""
                    },
                    {
                        "first": "Martine",
                        "middle": [],
                        "last": "Smets",
                        "suffix": ""
                    },
                    {
                        "first": "Simon",
                        "middle": [],
                        "last": "Corston-Oliver",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 20th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "673--679",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ringger, Eric, Michael Gamon, Robert C. Moore, David Rojas, Martine Smets & Simon Corston-Oliver (2004). Linguistically informed statistical models of con- stituent structure for ordering in sentence realization. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva, Switzerland, 23-27 August 2004, pp. 673-679.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Probabilistic Part-of-Speech tagging using decision trees",
                "authors": [
                    {
                        "first": "Helmut",
                        "middle": [],
                        "last": "Schmid",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "New Methods in Language Processing",
                "volume": "",
                "issue": "",
                "pages": "154--164",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Schmid, Helmut (1997). Probabilistic Part-of-Speech tagging using decision trees. In Daniel Jones & Harold Somers (Eds.), New Methods in Language Processing, pp. 154-164. London, U.K.: UCL Press.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Automatic summarizing: Factors and directions",
                "authors": [
                    {
                        "first": "Karen",
                        "middle": [],
                        "last": "Sp\u00e4rck Jones",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Advances in Automatic Text Summarization",
                "volume": "",
                "issue": "",
                "pages": "1--12",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sp\u00e4rck Jones, Karen (1999). Automatic summarizing: Factors and directions. In Inderjeet Mani & Mark T. Maybury (Eds.), Advances in Automatic Text Summa- rization, pp. 1-12. Cambridge, Mass.: MIT Press.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "WikiRelate! Computing semantic relatedness using Wikipedia",
                "authors": [],
                "year": 2006,
                "venue": "Proceedings of the 21st National Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "1419--1424",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "WikiRelate! Computing semantic relatedness using Wikipedia. In Proceedings of the 21st National Con- ference on Artificial Intelligence, Boston, Mass., 16- 20 July 2006, pp. 1419-1424.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Global revision in summarization: Generating novel sentences with Prim's algorithm",
                "authors": [
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Wan",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [],
                        "last": "Dale",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Dras",
                        "suffix": ""
                    },
                    {
                        "first": "Cecile",
                        "middle": [],
                        "last": "Paris",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "226--235",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wan, Stephen, Robert Dale, Mark Dras & Cecile Paris (2007). Global revision in summarization: Generating novel sentences with Prim's algorithm. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics, Melbourne, Australia, 19-21 September, 2007, pp. 226-235.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "type_str": "figure",
                "text": "After school Bohr studied physics and mathematics at the University of Copenhagen and got his PhD there",
                "num": null
            },
            "FIGREF1": {
                "uris": null,
                "type_str": "figure",
                "text": "The dependency tree of the sentence Bohr studierte Mathematik und Physik an der Uni in Kopenhagen (Bohr studied mathematics and physics at university in Copenhagen) as produced by the parser (a) and after all transformations applied (b)",
                "num": null
            },
            "FIGREF2": {
                "uris": null,
                "type_str": "figure",
                "text": "Graph obtained from sentences He studied sciences with pleasure and He studied math and physics with Bohr",
                "num": null
            },
            "TABREF1": {
                "content": "<table/>",
                "type_str": "table",
                "num": null,
                "text": "Average readability and informativity on a five point scale, average length in words",
                "html": null
            }
        }
    }
}