{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T13:27:16.664431Z"
    },
    "title": "Extending Implicit Discourse Relation Recognition to the PDTB-3",
    "authors": [
        {
            "first": "Li",
            "middle": [],
            "last": "Liang",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Edinburgh",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Zheng",
            "middle": [],
            "last": "Zhao",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Edinburgh",
                "location": {}
            },
            "email": "zheng.zhao@ed.ac.uk"
        },
        {
            "first": "Bonnie",
            "middle": [],
            "last": "Webber",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Edinburgh",
                "location": {}
            },
            "email": "bonnie@inf.ed.ac.uk"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The PDTB-3 contains many more implicit discourse relations than the previous PDTB-2. This is in part because implicit relations have now been annotated within sentences as well as between them. In addition, some now co-occur with explicit discourse relations, instead of standing on their own. Here we show that while this can complicate the problem of identifying the location of implicit discourse relations, it can in turn simplify the problem of identifying their senses. We present data to support this claim, as well as methods that can serve as a non-trivial baseline for future state-of-the-art recognizers for implicit discourse relations.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The PDTB-3 contains many more implicit discourse relations than the previous PDTB-2. This is in part because implicit relations have now been annotated within sentences as well as between them. In addition, some now co-occur with explicit discourse relations, instead of standing on their own. Here we show that while this can complicate the problem of identifying the location of implicit discourse relations, it can in turn simplify the problem of identifying their senses. We present data to support this claim, as well as methods that can serve as a non-trivial baseline for future state-of-the-art recognizers for implicit discourse relations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Most readers will be familiar with the PDTB-2 (Prasad et al., 2008) . At the time of its creation, it was the largest public repository of annotated discourse relations (over 43K), including over 18.4K signalled by explicit discourse connectives (coordinating or subordinating conjunctions, or discourse adverbials). In the corpus, discourse relations comprise two arguments labelled Arg1 and Arg2, with each relation anchored by either an explicit discourse connective or adjacency. In the latter case, annotators inserted one or more implicit connectives to signal the sense(s) they inferred to hold between the arguments. The size and availability of the PDTB-2 spawned work on shallow discourse parsing, as in the 2015 and 2016 CoNLL shared tasks (Xue et al., 2015).",
                "cite_spans": [
                    {
                        "start": 46,
                        "end": 67,
                        "text": "(Prasad et al., 2008)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 751,
                        "end": 768,
                        "text": "(Xue et al., 2015",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "With the release of the PDTB-3 1 , there are now \u223c12.5K additional intra-sentential relations annotated (i.e., relations that lie wholly within the projection of a top-level S-node) and \u223c1K additional inter-sentential relations (Webber et al., 2019). Work on shallow discourse parsing (including the CoNLL shared tasks, as well as (Bai and Zhao, 2018; Dai and Huang, 2018; Rutherford et al., 2017; Shi and Demberg, 2017) ) consistently shows that recognizing and sense labelling implicit discourse relations poses more of a challenge than doing so for explicit discourse relations. Hence, implicit relations are the focus of the current work.",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 248,
                        "text": "(Webber et al., 2019",
                        "ref_id": null
                    },
                    {
                        "start": 329,
                        "end": 349,
                        "text": "(Bai and Zhao, 2018;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 350,
                        "end": 370,
                        "text": "Dai and Huang, 2018;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 371,
                        "end": 395,
                        "text": "Rutherford et al., 2017;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 396,
                        "end": 418,
                        "text": "Shi and Demberg, 2017)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "But there is another reason as well: Work on the PDTB-2 has assumed (correctly) that non-explicit discourse relations (i.e., implicit relations, AltLex relations (Prasad et al., 2010) and entity relations) only hold between adjacent sentences as they did in the PDTB-2, so that a sentence boundary is the only position that needs to be checked for the presence of a non-explicit relation. The difficult problem lay in assigning sense-labels to implicit relations.",
                "cite_spans": [
                    {
                        "start": 162,
                        "end": 183,
                        "text": "(Prasad et al., 2010)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In Section 2, we show that, with the PDTB-3, this is no longer the case because non-explicit relations can hold within sentences as well as between them. This in turn motivates a new approach to handle implicit discourse relations in shallow discourse parsing, involving both finding them as well as identifying their senses (Section 3). After showing that the sense-distribution of implicit relations within sentences differs from that between them (cf. Section 4), we argue that one should be able to take advantage of this fact in sense-labelling these relations. 2 Section 5 describes two different ways of doing so, along with a way of dealing with another difference in sense distribution -that of implicit relations that co-occur with explicit relations and implicit relations that do not. While the particular methods used here for sense-labelling may not advance the state-of-the-art, it is the way we use them that should deliver a new baseline for recognizing a fuller range of implicit relations and contribute to the next generation of shallow discourse parsers. 3 2 Discourse Annotation in the PDTB-3 Discourse annotation in the PDTB-3 differs from that in the PDTB-2 in two major ways: (1) many more discourse relations are annotated within sentences, and (2) there are changes in the sense hierarchy used in annotating them. While only the first requires changes to shallow discourse parsing, presenting changes to the senses used in annotating relations will allow us to show differences in the distribution of senses associated with different types of implicit discourse relations.",
                "cite_spans": [
                    {
                        "start": 1076,
                        "end": 1077,
                        "text": "3",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "It was a consequence of the way the PDTB-2 was annotated that there were over twice as many discourse relations annotated across sentences as within them. The former were either explicit relations associated with discourse adverbials or sentence-initial coordinating conjunctions 4 , or implicit relations between paragraph-internal adjacent sentences not otherwise linked by a discourse connective. Within sentences, the only relations annotated were explicit relations associated with subordinating conjunctions, sentence-internal coordinating conjunctions, and discourse adverbials (both of whose arguments were in the same sentence). So it should not be surprising that there were many more inter-sentential relations than intra-sentential relations in the PDTB-2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Additional Annotation in PDTB-3",
                "sec_num": "2.1"
            },
            {
                "text": "In contrast, of the over 13K additional discourse relations annotated in the PDTB-3, over 95% of them occur within individual sentences. Of the new relations, 5780 are implicit, some standing alone (like the implicit relations between sentences), with others co-occurring with an explicit discourse relation. Within a sentence, implicit relations occur at the boundaries of syntactic forms -for example, at the boundary between a free adjunct and its matrix clause (Ex. 1), or at the boundary between a to-clause and its matrix clause (Ex. 2), or between two punctuation-marked conjuncts (Ex. 3).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Additional Annotation in PDTB-3",
                "sec_num": "2.1"
            },
            {
                "text": "3 It would not make sense to have separate processors for explicit discourse relations, as the decision process takes account of the discourse connective, thereby already learning whether the arguments are likely to occur across vs. within sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Additional Annotation in PDTB-3",
                "sec_num": "2.1"
            },
            {
                "text": "4 Despite what people may have been taught, there are over 2100 tokens of sentence-initial \"But\" in the Penn WSJ corpus and over 660 tokens of sentence-initial \"And\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Additional Annotation in PDTB-3",
                "sec_num": "2.1"
            },
            {
                "text": "(1) Treasury bonds got off to a strong start, advancing modestly during overnight trading on foreign markets. Conn=specifically (ARG2-AS-DETAIL) [wsj 0351]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Additional Annotation in PDTB-3",
                "sec_num": "2.1"
            },
            {
                "text": "(2) After a bad start, Treasury bonds were buoyed by a late burst of buying, to end modestly higher.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Additional Annotation in PDTB-3",
                "sec_num": "2.1"
            },
            {
                "text": "(3) Father McKenna moves through the house praying in Latin, urging the demon to split. (CONJUNCTION) [wsj 0413] Because implicit relations within sentences don't all occur at a single, well-defined position, this adds to the problems of shallow discourse parsing.",
                "cite_spans": [
                    {
                        "start": 102,
                        "end": 112,
                        "text": "[wsj 0413]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conn=therefore (RESULT) [wsj 0400]",
                "sec_num": null
            },
            {
                "text": "In addition to stand-alone implicits in the PDTB-3, annotators were allowed to indicate implicit relations that co-occur with explicit relations (Rohde et al., 2017, 2018), as a way of indicating a relation that did not derive from the explicit connective, but rather from what the annotator inferred from the arguments themselves, as in Ex. 4-6: (4) We've got to get out of the Detroit mentality and Implicit=instead be part of the world mentality, declares Charles M. Jordan, GM's vice president for design . . . In Ex. 4, the annotators indicated that they inferred ARG2-AS-SUBST from the pair of arguments conjoined with and. The annotators took and itself to convey only that its arguments played the same role with respect to the prior text. It is the arguments themselves that led them to conclude that the second conjunct is meant to substitute for the first.",
                "cite_spans": [
                    {
                        "start": 145,
                        "end": 164,
                        "text": "(Rohde et al., 2017",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 165,
                        "end": 186,
                        "text": "(Rohde et al., , 2018",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conn=therefore (RESULT) [wsj 0400]",
                "sec_num": null
            },
            {
                "text": "Similarly, in Ex. 5, the annotators indicated that they inferred the temporal relation PRECEDENCE from the pair of arguments conjoined with but. The annotators took but itself to convey CONCESSION. It is the arguments themselves that led the annotators to conclude that the second conjunct follows the first in time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conn=therefore (RESULT) [wsj 0400]",
                "sec_num": null
            },
            {
                "text": "Finally, in Ex. 6, the annotators indicated that they inferred a CONCESSION relation from the pair of arguments linked by without. The annotators took without itself (like its positive version with) to convey MANNER. It is only the arguments that led them to conclude that Arg2 denies an expectation raised by Arg1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conn=therefore (RESULT) [wsj 0400]",
                "sec_num": null
            },
            {
                "text": "In the PDTB-3, when two relations co-occur, they are explicitly linked through a shared index. The consequence for shallow discourse parsing is that explicit relations now need to be checked for co-occurrence with an implicit relation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conn=therefore (RESULT) [wsj 0400]",
                "sec_num": null
            },
            {
                "text": "The sense hierarchy used in annotating the PDTB-3 differs from that used in annotating the PDTB-2 in three ways:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Changes to the Sense Hierarchy",
                "sec_num": "2.2"
            },
            {
                "text": "1. Rare and/or difficult to annotate senses were dropped, as with the different types of conditional senses;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Changes to the Sense Hierarchy",
                "sec_num": "2.2"
            },
            {
                "text": "2. Sense relations at Level-3 now only encode directionality -for example, distinguishing ARG1-AS-SUBST (Ex. 7) from ARG2-AS-SUBST (Ex. 8)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Changes to the Sense Hierarchy",
                "sec_num": "2.2"
            },
            {
                "text": "3. New senses were added that were found to be needed for annotating relations within sentences. More about the senses used in annotating the PDTB-3 can be found in Webber et al. (2019) . Senses are relevant to this discussion of implicit relations in shallow discourse parsing because (as set out in Section 4) implicit relations have been found to have different sense distributions depending on where they occur.",
                "cite_spans": [
                    {
                        "start": 165,
                        "end": 185,
                        "text": "Webber et al. (2019)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Changes to the Sense Hierarchy",
                "sec_num": "2.2"
            },
            {
                "text": "Both the PDTB-2 and PDTB-3 use stand-off annotation. What is relevant with respect to the experiments we report here is what information is explicit in the annotation, as opposed to having to be computed. This information includes (1) the type of the relation (Explicit, Implicit, AltLex, AltLexC, Entity, Hypophora, NoRel); (2) the byte spans of the two arguments of the relation; and (3) the explicit index (aka link) of relations that co-occur by virtue of sharing the same or nearly the same arguments. The full field structure of discourse relations is set out in Section 8 of Webber et al. (2019) . What has to be recovered from the argument spans and the span of the projection of the top node in each sentence-level parse tree is whether a relation occurs wholly within a single sentence or involves multiple sentences.",
                "cite_spans": [
                    {
                        "start": 584,
                        "end": 604,
                        "text": "Webber et al. (2019)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Stand-off annotation in the PDTB-3",
                "sec_num": "2.3"
            },
            {
                "text": "The sense classifiers for implicit relations used in this paper are based on a Basic Model whose properties reflect consideration of data size and the interaction between lexical information and structural information. (A full description of the Basic Model is given in Appendix A.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Model Architecture",
                "sec_num": "3"
            },
            {
                "text": "The architecture of Basic Model is shown in Figure 1 . It consists of two LSTMs (Hochreiter and Schmidhuber, 1997) and max-pooling layers, a hidden layer, a dense layer, and a softmax layer. Inputs to the model consist of pairs of discourse arguments, each represented as a sequence of word vectors. The output is a probability distribution of the senses between the discourse argument spans. The two sequences of word vectors are encoded by LSTMs in order to capture positional information within the sequential structure. Max-pooling on the output of the LSTMs is used to compose meaning and reduce parameters for the model, as it has been proven effective in Conneau et al. (2017) . Modeling the interaction between discourse arguments follows , who argue that discourse relations can only be determined by jointly analyzing the arguments. In addition, Rutherford et al. (2017) observed the influence of different configurations on the performance of the model for the implicit sense classification task, suggesting an interaction between the lexical information in word vectors and the structural information encoded in the model itself. We follow them in adopting a 300-dimension word2vec (Mikolov et al., 2013b) word embedding and hidden size of 100 for the Basic Model. Table 1 compares the distribution of intersentential and intra-sentential implicit relations with respect to the PDTB-3's Level-2 sense labels, along with the proportion of each label to the total inter-sentential and intra-sentential implicit relations. Besides differences in frequency -for example, relations expressing PURPOSE constitute 21.76% of intra-sentential implicit relations, while only 0.12% of inter-sentential implicits, while relations expressing INSTANTIATION constitute 8.89% of inter-sentential implicits, while only 1.4% of intra-sentential implicits -the senses of inter-sentential implicits are more unequally distributed. 
That is, three senses -CONTIN-GENCY.CAUSE, EXPANSION.CONJUNCTION and LEVEL-OF-DETAIL cover 67.08% of the intersentential implicits. In contrast, except for CON-TINGENCY.CAUSE and PURPOSE, most of the other intra-sentential implicits are more evenly distributed. As often happens with training on an imbalanced distribution, the unequal distribution of inter-sentential relations can lead the model to predict the majority class, ignoring minority classes.",
                "cite_spans": [
                    {
                        "start": 80,
                        "end": 114,
                        "text": "(Hochreiter and Schmidhuber, 1997)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 662,
                        "end": 683,
                        "text": "Conneau et al. (2017)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 856,
                        "end": 880,
                        "text": "Rutherford et al. (2017)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 1194,
                        "end": 1217,
                        "text": "(Mikolov et al., 2013b)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 44,
                        "end": 52,
                        "text": "Figure 1",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 1277,
                        "end": 1284,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Basic Model Architecture",
                "sec_num": "3"
            },
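The Basic Model described above can be sketched roughly in PyTorch. This is a minimal illustration under stated assumptions, not the authors' code: the number of Level-2 sense labels (`n_senses=14`) and the batch-first tensor layout are assumptions; only the overall shape (two LSTMs, max-pooling, a tanh hidden layer combining the arguments, a dense layer, a softmax output; 300-d embeddings, hidden size 100) follows the paper.

```python
import torch
import torch.nn as nn

class BasicModel(nn.Module):
    """Sketch of the Basic Model: two LSTMs + max-pooling, a tanh hidden
    layer jointly combining the two argument representations, a dense
    layer, and a softmax over senses. n_senses=14 is an assumed count."""
    def __init__(self, emb_dim=300, hidden=100, n_senses=14):
        super().__init__()
        self.lstm1 = nn.LSTM(emb_dim, hidden, batch_first=True)  # encodes Arg1
        self.lstm2 = nn.LSTM(emb_dim, hidden, batch_first=True)  # encodes Arg2
        self.w1 = nn.Linear(hidden, hidden, bias=False)  # W1 for Arg1
        self.w2 = nn.Linear(hidden, hidden)              # W2 for Arg2 (carries b_hid)
        self.dense = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, n_senses)

    def forward(self, arg1_vecs, arg2_vecs):
        h1, _ = self.lstm1(arg1_vecs)     # (batch, len, hidden)
        h2, _ = self.lstm2(arg2_vecs)
        a1 = h1.max(dim=1).values         # max-pool over time steps
        a2 = h2.max(dim=1).values
        hid = torch.tanh(self.w1(a1) + self.w2(a2))   # joint representation
        dense = torch.tanh(self.dense(hid))
        return torch.softmax(self.out(dense), dim=-1)

model = BasicModel()
# two relation tokens: Arg1 of 7 words, Arg2 of 5 words, 300-d word vectors
probs = model(torch.randn(2, 7, 300), torch.randn(2, 5, 300))
```

Each forward pass yields one probability distribution over senses per argument pair.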
            {
                "text": "As for the 1753 implicits that co-occur with explicit relations, Table 2 shows that their sense distribution differs sharply from that of stand-alone implicit relations. For example, over 70% convey either CAUSE or ASYNCHRONOUS, while this holds of only 28.7% of stand-alone implicit relations. As such, linked implicits should be more predictable than stand-alone implicit relations.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 65,
                        "end": 72,
                        "text": "Table 2",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Basic Model Architecture",
                "sec_num": "3"
            },
            {
                "text": "Differences in the distribution of implicit relations within sentences and across sentences suggest that we exploit this difference in sense-labelling implicit relations. In this section, we first assume that we know where implicit relations are located within a sentence, so that we can simply consider their arguments. We then present work we have done towards relaxing this assumption.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inter-and intra-sentential Implicits",
                "sec_num": "5"
            },
            {
                "text": "Task 1: Consider the location of implicit relations in classification. There are different ways to take the location of implicit relations into consideration. Here we present two models, Model 1 (Section 5.2) and Model 2 (Section 5.3), both based on the basic model architecture described in Section 3. We compare them with the Basic Model, which uses the same classifier on all tokens. We compare their performance not just using the standard training-development-test split, where the ratio of inter-to intra-sentential implicits in the training set, WSJ section 2-21, is 12787:5014. In addition, we follow Shi and Demberg (2017) , who argue that evaluation through cross-validation is more predictive, given the wide variation in texts that appear in different sections of the Penn Wall Street Journal corpus. The average ratio of interto intra-sentential implicits in training sets of crossvalidation is 12747:4992. The scores of 3 models are weighted by the proportion of inter-and intrasentential tokens in the test set. Table 1 : Distribution of inter-sentential/intra-sentential implicit relations among Level 2 labels and the proportion of each label with respect to inter-sentential/intra-sentential implicit relations lations hold within sentences, two recognizers to identify implicit relations and find argument spans are provided. The first recognizer (Section 5.4) takes syntactic features to identify sentences that contain intra-sentential relations. The second recognizer (Section 5.5) exploits the properties that some explicit relations are linked with implicit relations, checking the explicit relations for cooccurrence with implicit relations to obtain the shared arguments.",
                "cite_spans": [
                    {
                        "start": 609,
                        "end": 631,
                        "text": "Shi and Demberg (2017)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1027,
                        "end": 1034,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Inter-and intra-sentential Implicits",
                "sec_num": "5"
            },
            {
                "text": "The Basic Model uses the same classifier on all tokens. Since we know which tokens are intersentential and which are intra-sentential, we can compare how well the Basic Model does on each.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Model",
                "sec_num": "5.1"
            },
            {
                "text": "To compute the F 1 scores for the overall performance of the model, the scores of the model are combined, weighted by the proportion of inter-or intra-sentential tokens in the test set. This is shown on the first line of Table 3 , elaborated in the confusion matrix shown in Figure 2 . A Chi-squared test on the results show the performance of the Basic Model appears to depend to a statistically significant extent on whether the sense appears inter-or intra-sententially (p=1.50e-03).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 221,
                        "end": 228,
                        "text": "Table 3",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 275,
                        "end": 283,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Basic Model",
                "sec_num": "5.1"
            },
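The weighted combination of per-location scores described above can be illustrated in a few lines of Python; the function name and the example numbers below are invented for illustration.

```python
def combined_f1(f1_inter, f1_intra, n_inter, n_intra):
    """Weight per-location F1 scores by the proportion of inter- vs
    intra-sentential tokens in the test set."""
    total = n_inter + n_intra
    return (f1_inter * n_inter + f1_intra * n_intra) / total

# invented example counts: 800 inter-sentential tokens, 200 intra-sentential
score = combined_f1(40.0, 50.0, 800, 200)  # -> 42.0
```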
            {
                "text": "Model architecture: The idea behind Model 1 is to separate the classification task into intrasentential and inter-sentential implicit sense clas-sification, with separate classifiers for each. The model architecture and configuration of each classifier are the same as in the Basic Model (Section 3). We expect each classifier to capture different sense distributions of intra-sentential or inter-sentential implicits.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model 1",
                "sec_num": "5.2"
            },
            {
                "text": "Training and evaluation: Based on their argument spans and the spans associated with each sentence in a file, tokens can be labeled as intersentential or intra-sentential. For the standard training-development-test framework, the tokens are allocated into separate inter-sentential/intrasentential training, development, and test sets. The inter-sentential training set is used in training the inter-sentential implicit sense classifier, and similarly for intra-sentential classification. Test set tokens labeled as inter-sentential or intra-sentential are fed into the appropriate classifier.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model 1",
                "sec_num": "5.2"
            },
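The data allocation for Model 1 amounts to routing each annotated token to one of two training sets by its location label. A minimal sketch (the dict field names here are assumptions, not the corpus's field names):

```python
def split_by_location(tokens):
    """Allocate annotated relation tokens to the two training sets used
    by Model 1, keyed on a precomputed location label ('location' is an
    assumed field name)."""
    inter = [t for t in tokens if t["location"] == "inter"]
    intra = [t for t in tokens if t["location"] == "intra"]
    return inter, intra

tokens = [
    {"arg1": "...", "arg2": "...", "location": "inter"},
    {"arg1": "...", "arg2": "...", "location": "intra"},
    {"arg1": "...", "arg2": "...", "location": "inter"},
]
inter_train, intra_train = split_by_location(tokens)
```

The same routing is applied to development and test tokens, so each classifier only ever sees data from its own location class.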
            {
                "text": "Results: The second line of Table 3 presents F 1 scores for Model 1 evaluated on the main evaluation test set and by cross-validation. It shows that Model 1 improves on the Basic Model in predicting intra-sentential implicit relations. The performance of the model significantly depends on the location of relations (p = 2.41e-09). The confusion matrix for Model 1 5 (cf. Figure 2) shows that labels with a relatively larger sample size in each set are predicted more often, includ- ing CONTINGENCY.PURPOSE (frequent in intrasentential implicits), EXPANSION.CONJUNCTION (frequent in inter-sentential implicits) and CONTIN-GENCY.CAUSE (frequent in both). The confusion matrix also shows that less frequent senses are confused with these frequent labels more often. Model 1 also reduces the ignorance problem of the Basic Model, in that it correctly classifies some samples into TEMPORAL.SYNCHRONOUS, which is a label ignored by the basic model.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 28,
                        "end": 35,
                        "text": "Table 3",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 372,
                        "end": 381,
                        "text": "Figure 2)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Model 1",
                "sec_num": "5.2"
            },
            {
                "text": "Model architecture: Model 2 treats being intersentential or intra-sentential as a single binary feature. Model 2 is created by modifying the Basic Model to include this feature after obtaining the combined representations of the two arguments. We concatenate the binary feature f S with the output of the dense layer before applying the softmax function, expecting it to affect the final prediction.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model 2",
                "sec_num": "5.3"
            },
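Model 2's modification can be sketched as a small head that concatenates f_S with the dense-layer output just before the softmax. A hypothetical PyTorch fragment (the class name and dimension choices are illustrative, not the authors'):

```python
import torch
import torch.nn as nn

class Model2Head(nn.Module):
    """Hypothetical final layers of Model 2: the binary location feature
    f_S (e.g. 1 = intra-sentential, 0 = inter-sentential) is concatenated
    with the dense-layer output before the softmax."""
    def __init__(self, hidden=100, n_senses=14):
        super().__init__()
        self.out = nn.Linear(hidden + 1, n_senses)  # +1 slot for f_S

    def forward(self, dense_repr, f_s):
        x = torch.cat([dense_repr, f_s.unsqueeze(-1)], dim=-1)
        return torch.softmax(self.out(x), dim=-1)

head = Model2Head()
dense_repr = torch.randn(4, 100)              # stand-in for the dense layer output
f_s = torch.tensor([1.0, 0.0, 1.0, 0.0])      # one location bit per token
pred = head(dense_repr, f_s)
```

Because f_S enters only at the last layer, a single classifier serves both locations while still letting the location bit shift the final prediction.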
            {
                "text": "Training and evaluation: The data selection follows the standard and cross-validation data split process. The evaluation assumes that each token in the test set has been given an inter-sentential or intra-sentential feature. The scores are computed following the general process as the basic model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model 2",
                "sec_num": "5.3"
            },
            {
                "text": "The third line of Table 3 shows that Model 2 improves over the Basic Model with respect to both inter-and intra-sentential implicit sense prediction, though the performance of the model still has a statistically significant dependence on the location of relations (p = 4.53e-04). The improvement of Model 2 on intra-sentential labels is not as dramatic as Model 1. Compared to the previous model, Model 2 doesn't sharpen its focus on those frequent labels in inter-or intra-sentential sets. Instead, the integrated feature in the representations distributes the benefits on the prediction ability of different labels more evenly. In addition, the confusion matrix in Figure 2 shows that Model 2 reduces the confusion between INSTANTIATION and LEVEL-OF-DETAIL, which Scholman and Demberg (2017) have hightlighted as a common source of confusion. The confusion matrix for Model 2 also shows some attention to less frequent labels such as COMPARISON.CONTRAST, which are not predicted in either the Basic Model or Model 1. ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 18,
                        "end": 25,
                        "text": "Table 3",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 667,
                        "end": 675,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Results:",
                "sec_num": null
            },
            {
                "text": "The results presented above reflect \"gold knowledge\" of where implicit discourse relations hold within sentences. But in truth, their locations need to be identified before (or jointly with) labelling their senses. We have viewed this as a two-step process: Recognizing sentences that contain at least one implicit intra-sentential relation, and then recognizing the arguments to each relation. The first step has been implemented using a recognizer that takes a linearized parse tree of a sentences as the input. The second step is future work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "Model architecture: Similar to the Basic Model, inputs are represented as a sequence of word vectors, and word embeddings are initialized using pretrained fastText (Bojanowski et al., 2017) vectors (16B tokens). These vectors are fed to a BiLSTM whose outputs are then fed to a linear layer to produce a binary label, indicating the existence of at least one implicit intra-sentential relation. Word embeddings are set to 200, hidden dimensions, to 256, and vocabulary size, to 25k.",
                "cite_spans": [
                    {
                        "start": 164,
                        "end": 189,
                        "text": "(Bojanowski et al., 2017)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
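A rough PyTorch sketch of such a recognizer, under stated assumptions: embeddings are randomly initialized here rather than fastText-initialized, and the final time step's BiLSTM states are pooled for the binary decision (the paper does not specify the pooling).

```python
import torch
import torch.nn as nn

class IntraSentRecognizer(nn.Module):
    """Sketch of the recognizer: a BiLSTM over the embedded linearized
    parse tree, then a linear layer producing one binary logit per
    sentence. Sizes follow the text (200-d embeddings, hidden 256,
    25k vocabulary); fastText initialization is omitted."""
    def __init__(self, vocab=25000, emb=200, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.clf = nn.Linear(2 * hidden, 1)  # forward + backward states

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))
        # pool the last time step's concatenated states (an assumption)
        return self.clf(h[:, -1, :]).squeeze(-1)  # one logit per sentence

rec = IntraSentRecognizer()
logits = rec(torch.randint(0, 25000, (3, 40)))  # 3 linearized trees, 40 tokens
```

A sigmoid over each logit then gives the probability that the sentence contains at least one implicit intra-sentential relation.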
            {
                "text": "Training and evaluation: To train our recognizer, we first created a dataset of triplets comprising a sentence from PDTB-3, its corresponding parse tree, and a binary label. We obtain the parse trees from the Penn TreeBank (PTB - Marcus et al. 1993 ) and set the binary label to 1 if there exist at least one implicit or AltLex relation in that sentence. For example, the sentence in Ex. 9 is labelled 1, while that in Ex. 10 is labelled 0. Intra-sentential AltLex relations are included here because they are simply Implicit relations whose alternative lexicalization reliably signals its sensefor example, the phrases \"resulting in\", \"avoiding\", and \"contributing to\" are all taken to be alternative lexicalizations that reliably signal RESULT. This is not true of the earlier Examples 1-3, which are classed as Implicits. On the other hand, we do not label \"linked\" implicit relations as 1 because the visible evidence is an explicit connective signalling an explicit relation, and we don't want that to be taken per se as evidence for an implicit relation. For recognizing linked implicits, we have built a separate model which will be discussed in Section 5.5. Our training used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-4. We randomly split the dataset into training (60%), development (20%) and test (20%). To understand what happens if \"gold parse trees\" are not used, we also created variants of the dataset using parse trees from the widely used Berkeley parser (Kitaev and Klein, 2018) and Stanford parser (Manning et al., 2014) .",
                "cite_spans": [
                    {
                        "start": 230,
                        "end": 248,
                        "text": "Marcus et al. 1993",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1498,
                        "end": 1522,
                        "text": "(Kitaev and Klein, 2018)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1527,
                        "end": 1565,
                        "text": "Stanford parser (Manning et al., 2014)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "Results: As the dataset is heavily imbalanced, we also added a simple baseline which predicts the most frequent label. Test set results of the recognizer on the three datasets are presented in Table 4 . Even though the baseline achieved an accuracy of \u223c0.9, it doesn't convey any useful information, as it labels all instances as 0. We can observe that the model with gold Penn TreeBank parse trees obtain the best performance, followed by the Berkeley parser. Stanford parse trees result in worst perfor- Table 4 : Results on task of identifying sentences that contain at least one intra-sentential relation, comparing gold parse trees from the PTB with the parse trees output by the Berkeley parser and by the Stanford parser.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 193,
                        "end": 200,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 506,
                        "end": 513,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "Baseline refers to the model that predicts the most frequent label.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "mance. Examining these trees led us to conclude that, while the Stanford parser does well for basic syntactic structures, which are the most common, it has trouble with challenging structures such as those associated with conjunction. An example is provided in Ex. 11. Here, \"steps\" has been incorrectly labelled NNS, when it is actually a VBZ, heading the second conjunct. If there were only two conjuncts, explicitly conjoined with \"and\", the sentence would not contain an implicit relation. With three conjuncts, however, the first two would normally be comma-conjoined, with the discourse relation between them taken to be implicit. But the error in PoS-tagging has eliminated evidence of a second conjunct, with an implicit discourse relation to the first conjunct. Errors in PoS-tagging and mis-parsing associated with rare constructions, means that the accuracy is lower than that of the Berkeley parser. However, as Precision, Recall, and F 1 are measured for 1 labels, these metrics are more adversely affected when compared to those of the Berkeley parser. Table 5 : Precision, Recall and F 1 scores of linked/stand-alone labels predicted by the recognizer using main evaluation metrics and their proportion in test data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1067,
                        "end": 1074,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "described in Section 5.4, we actually know the location of their arguments, because co-occurring (aka \"linked\") relations share their argument spans. Hence, recognizing explicit relations linked with implicit ones means that we also obtain argument spans of these implicits. Here we describe a first attempt to automatically discriminate explicit relations linked with implicit relations from ones that are not so linked. It comprises two steps: extracting sentences that contain explicit relations as our datasets, and then recognizing the ones linked with implicit relations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "Model architecture: To detect linked implicit relations from explicit relations, we use a naive Bayes classifier -specifically, the one provided in NLTK (Bird and Loper, 2004) . Production rules are selected as input feature as it has been proven notably effective in feature-based implicit discourse relation recognition task among different features (Park and Cardie, 2012) . Models trained in Task 1 will be adopted for linked sense classification.",
                "cite_spans": [
                    {
                        "start": 153,
                        "end": 175,
                        "text": "(Bird and Loper, 2004)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 352,
                        "end": 375,
                        "text": "(Park and Cardie, 2012)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Towards finding implicits within sentences",
                "sec_num": "5.4"
            },
            {
                "text": "We follow the standard split to select the training and test set. Each token in the training set consists of Arg1, connective and Arg2, and are parsed to extract syntactic productions used in parent-child nodes in the argument parse trees. The 100 most-frequent production rules are used to build a feature dictionary for input. A production rule feature is labeled as 1 in the dictionary if it appears in the parse tree of the token, otherwise it will be 0. The linked/stand-alone label is determined by whether the explicit relation shares the same index value with an implicit relation. The recognizer is evaluated by how well it distinguishes explicit relations that have a linked implicit relation from ones that don't. Classifiers are evaluated on the recognized implicit relations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training and evaluation:",
                "sec_num": null
            },
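The binary production-rule feature dictionary can be illustrated without NLTK using a nested-tuple parse tree; the helper names and the toy rule list below are invented for illustration (the paper uses NLTK trees and the 100 most frequent rules).

```python
def production_rules(tree):
    """Yield parent -> children production rules from a nested-tuple
    parse tree of the form (label, child1, ...), leaves as strings."""
    if isinstance(tree, str):
        return
    label, *children = tree
    kids = [c if isinstance(c, str) else c[0] for c in children]
    yield "{} -> {}".format(label, " ".join(kids))
    for child in children:
        yield from production_rules(child)

def feature_dict(tree, top_rules):
    """Binary feature dictionary over the chosen most-frequent rules."""
    present = set(production_rules(tree))
    return {rule: int(rule in present) for rule in top_rules}

tree = ("S", ("NP", "They"), ("VP", ("VBD", "left"), ("NP", "town")))
top = ["S -> NP VP", "VP -> VBD NP", "NP -> DT NN"]  # toy stand-in for the top 100
feats = feature_dict(tree, top)
```

A dictionary of this shape is exactly what NLTK's naive Bayes classifier consumes as a feature set.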
            {
                "text": "Results: The low Recall for linked relations in Table 5 shows that the recognizer performs better on predicting stand-alone relations, which are a majority of the data. Linked implicits in the test set (WSJ Section 23) are mostly linked to conjoined clauses or conjoined VPs, and are signaled by implicit connective like \"and\" (81.08%) or \"but\" or an adverbial. Most correctly recognized relations are VPs conjoined with \"and\". All the recognized linked implicit relations are found intra-sentential. We adopt the intra-sentential classifier in Model 1 and the Basic Model to test the classifier based on the recognized results. The intra-sentential classifier achieves an F 1 score of 75, compared with 68.182 using the Basic Model. This again emphasizes that knowing the location of implicit discourse relation would benefit sense identification.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 48,
                        "end": 55,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Training and evaluation:",
                "sec_num": null
            },
            {
                "text": "We have shown that recognizing implicit discourse relations as annotated in the PDTB-3 now requires finding them, as well as figuring out what sense relation(s) holds between the arguments. However, we have also shown that the latter task is simplified by differences in the sense distribution of different implicit relations. We still have to develop a way of recognizing precisely where implicit relations hold in those sentences that can be identified as containing them, and a more accurate approach to sense labelling implicit relations that co-occur with explicit ones. We are also interested in whether these different sense distributions hold in other news corpora and other genres. While it is likely not the case that all languages show the same difference in the sense distribution of discourse relations, we would not be surprised if the discourse relations realized within sentences differed from those realized across sentences. In conclusion, we hope that the current effort will contribute to future work on shallow discourse parsing as annotated in the PDTB-3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "a 1 j = max k\u2208n 1 (H Arg2 j k ) (5) a 2 j = max k\u2208n 2 (H Arg1 j k ) (6) A Arg1 = [a 1 1 , a 1 2 , ..., a 1 hidden size ]",
                        "eq_num": "(7)"
                    }
                ],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
            {
                "text": "A Arg2 = [a 2 1 , a 2 2 , ..., a 2 hidden size ]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
            {
                "text": "Inter-argument interaction modeling: The modeling of the interaction between two discourse argument representations follows , which argues that discourse relations can only be determined by jointly analyzing the arguments. In our model, argument representations A Arg1 and A Arg2 are weighted by W 1 and W 2 separately. The combination of the weighted argument representations is then transformed non-linearly with tanh function in the first hidden layer H hid . It is then fed into a dense layer H dense 7 . Finally, we predict the discourse relation sense using a softmax function.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
            {
                "text": "H hid = tanh(W 1 \u2022A Arg1 +W 2 \u2022A Arg2 +b hid ) (9) H dense = tanh(W dense \u2022 H 1 + b dense ) (10) output = sof tmax(W output \u2022 H dense + b output ) (11) A.2 Configuration Implementation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
            {
                "text": "The model is implemented with PyTorch. The cost function is the standard crossentropy loss function and Adam optimizer with an initial learning rate of 0.001 and a batch size of 32. We determine convergence if the performance of the model on the development set does not improve after more than 3 epochs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
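The training setup described above can be sketched as follows. This is an illustrative sketch, not the authors' code: the model, data loader, and dev-set scorer are placeholder assumptions.

```python
# Sketch of the training loop from the text: cross-entropy loss, Adam with
# an initial learning rate of 0.001, batches of 32, and early stopping once
# the dev score fails to improve for more than 3 epochs.
import torch
import torch.nn as nn

def train(model, train_loader, evaluate_dev, max_epochs=100, patience=3):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    best_dev, epochs_without_gain = float("-inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for arg1, arg2, sense in train_loader:  # batches of size 32
            optimizer.zero_grad()
            loss = criterion(model(arg1, arg2), sense)
            loss.backward()
            optimizer.step()
        dev_score = evaluate_dev(model)
        if dev_score > best_dev:
            best_dev, epochs_without_gain = dev_score, 0
        else:
            epochs_without_gain += 1
            if epochs_without_gain > patience:
                break  # converged: no dev improvement for more than 3 epochs
    return best_dev
```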
            {
                "text": "One problem that challenges the training of the model is the limitation on the size of the data. We introduce other resources to overcome it and adopt different techniques to avoid overfitting. Word vectors are directly taken from Word2vec embeddings (Mikolov et al., 2013a) trained with the skip-gram algorithm on Brown corpus, and are fixed during training. To avoid overfitting, we apply a 0.25 dropout ratio to the input of the LSTM layer. Batch normalization is added to normalize the activation between the hidden layer and the dense layer to accelerate the training speed and further prevent overfitting with regularization. Hyperparameter Settings: (Rutherford et al., 2017) observed the influence of different configurations on the performance of the model for the implicit sense classification task, suggesting an interaction between the lexical information in word vectors and the structural information encoded in the model itself. To determine the configuration for our model, we trained our model with different combinations of the dimension of word embedding (50, 300) and hidden size (50, 100), and evaluate it on Level 2 labels on the WSJ section 23. Table 6 presents the performance of the model with different configurations. The baseline is Most Frequent Sense heuristic, using the most frequent sense CONTINGENCY.CAUSE in the training data for each target. Our result is in line with their finding of sequential LSTM model, showing larger hidden size 100 is effective when it is accompanied with 300-dimension word embedding. Based on the performance on Level 2 labels, we choose 300dimension Word2vec word embedding and hidden size 100 as our configuration for the Basic Model.",
                "cite_spans": [
                    {
                        "start": 251,
                        "end": 274,
                        "text": "(Mikolov et al., 2013a)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 657,
                        "end": 682,
                        "text": "(Rutherford et al., 2017)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1168,
                        "end": 1175,
                        "text": "Table 6",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
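The configuration search described above amounts to a small grid over embedding dimension and hidden size. A minimal sketch, where build_model and evaluate are placeholders for training a model and scoring it on WSJ section 23:

```python
# Grid over the configurations from the text: embedding dimension in
# {50, 300} crossed with LSTM hidden size in {50, 100}; the configuration
# with the best Level 2 score is kept.
from itertools import product

def select_config(build_model, evaluate, emb_dims=(50, 300), hidden_sizes=(50, 100)):
    best = None
    for emb_dim, hidden in product(emb_dims, hidden_sizes):
        score = evaluate(build_model(emb_dim, hidden))
        if best is None or score > best[0]:
            best = (score, emb_dim, hidden)
    return best  # (score, emb_dim, hidden) of the best configuration
```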
            {
                "text": "Our model scores 34.778 at Level 3 (31-way classification). Using cross-validation, our model obtains 41.463 at Level 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and future work",
                "sec_num": "6"
            },
            {
                "text": "It is worth examining the performance of the model on each Level 2 label individually. Table 7 displays the precision, recall and F 1 scores of each label along with its proportion in the test data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 87,
                        "end": 94,
                        "text": "Table 7",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "A.3 Discussion",
                "sec_num": null
            },
            {
                "text": "The classifier obtains relatively higher scores on some types of labels. The first type is senses with larger sample size in the corpus, suggesting the imbalanced classification problem. Two senses occur frequently in the corpus (CONTINGENCY.CAUSE and EXPANSION.CONJUNCTION) are recognized with high Recall, but low Precision. This could indicate a strong signal, but one that is likely to be ambiguous. Other less frequent labels are constantly misclassified into these frequent labels. For example, the amount of EXPAN-SION.MANNER samples is largely reduced by our method dealing with multi-label instances, and the classifier fails to recognize the minority class. Another type of senses achieving high scores are those occurring predominantly in intrasentential relations (CONTINGENCY.PURPOSE and CONTINGENCY.CONDITION) or in intersentential relations (EXPANSION.INSTANTIATION and EXPANSION.LEVEL-OF-DETAIL). The model recognize these senses with high Precision, but different levels of Recall, which could be due to a difference in the strength of evidence signalling the relation. Additionally, TEMPO-RAL.ASYNCHRONOUS sense that associates with much higher proportion in linked relations than stand-alone ones obtain similar Recall and Precision scores.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A.3 Discussion",
                "sec_num": null
            },
            {
                "text": "Some previous approaches to discourse parsing have also distinguished relations that occur within a sentence from those that occur across sentences(Joty et al., 2013(Joty et al., , 2015, but it was not felt to be needed in the PDTB-2, where implicit relations only appeared across sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Differences in the distribution of sense relationsTo argue for separating the recognition of intrasentential implicits from inter-sentential implicits, and the recognition of linked implicits from standalone implicits, we show how their sense distributions are different.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "combining results of the inter-sentential and intrasentential classifiers",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "These labels are not used in the basic model described in this work, but serve for statistical tests and further experiments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The default size of the dense layer is hidden size//5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We would like to thank the anonymous reviewers for their valuable comments. We would also like to thank Annie Louis for her contributions to the work on recognizing the presence of sentence-internal implicit discourse relations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            },
            {
                "text": "Here we describe the basic model architecture for implicit relation sense classification in PDTB-3. The configuration for the model is chosen based on consideration of data size and the interaction between lexical information and structural information. A further analysis on the predictive performance of the basic model on each labels is provided as well.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Specifics of the Basic Model",
                "sec_num": null
            },
            {
                "text": "Figure 1 (repeated here as Figure 3 ) illustrates the overall model architecture of the neural implicit sense classifier that consists of two LSTM and maxpooling layers, a hidden layer, a dense layer, and a softmax layer. The input for the model is the discourse argument pairs with additional labels 6 , and the output is a probability distribution of the senses between the discourse argument spans.Word vectors: In our model, arguments Arg1 and Arg2 are viewed as two sequences of word vectors with length of n 1 and n 2 . Word vectors for the word in arguments are taken from word embeddings.Arg1Argument representations: The two sequences of word vectors are encoded by LSTM respectively. The hidden states H Arg1 and H Arg2 of LSTM are taken. The max-pooling function is employed to compose meaning in the hidden states and reduce parameters for the model, as it has been proven effective in (Conneau et al., 2017) . As shown in eq. 6, it will select the maximum value along the sequence at each dimension of the hidden states. a 1 j (a 2 j ) represents a maximum value from all the values in a sequence with length of n 1 (n 2 ) at dimension j of the hidden states H Arg1 (H Arg2 ). By concatenating the output of max-pooling function, we have abstract representations A Arg1 and A Arg2 of arguments Arg1 and Arg2 individually. Table 7 : Precision, Recall and F 1 scores of different labels predicted by the basic model using main evaluation metric and their proportions in test data",
                "cite_spans": [
                    {
                        "start": 898,
                        "end": 920,
                        "text": "(Conneau et al., 2017)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 27,
                        "end": 35,
                        "text": "Figure 3",
                        "ref_id": null
                    },
                    {
                        "start": 1335,
                        "end": 1342,
                        "text": "Table 7",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "A.1 Model architecture",
                "sec_num": null
            }
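The argument encoder described above (an LSTM followed by max-pooling over its hidden states, eq. 6) can be sketched as a PyTorch module. This is an illustrative sketch under the paper's stated configuration (300-dimensional embeddings, hidden size 100), not the authors' code:

```python
# Sketch of eq. 6: run an argument's word vectors through an LSTM and
# max-pool the hidden states over time to get a fixed-size representation.
import torch
import torch.nn as nn

class ArgumentEncoder(nn.Module):
    def __init__(self, emb_dim=300, hidden_size=100):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_size, batch_first=True)

    def forward(self, word_vectors):                   # (batch, seq_len, emb_dim)
        hidden_states, _ = self.lstm(word_vectors)     # H_Arg: (batch, seq_len, hidden)
        # a_j = max over the sequence at each dimension j of the hidden states
        pooled, _ = hidden_states.max(dim=1)           # A_Arg: (batch, hidden)
        return pooled
```

Each argument is encoded by its own such module; the two pooled vectors A_Arg1 and A_Arg2 then feed the interaction layers.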
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Deep enhanced representation for implicit discourse relation recognition",
                "authors": [
                    {
                        "first": "Hongxiao",
                        "middle": [],
                        "last": "Bai",
                        "suffix": ""
                    },
                    {
                        "first": "Hai",
                        "middle": [],
                        "last": "Zhao",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 27th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "571--583",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recog- nition. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 571- 583, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "NLTK: The natural language toolkit",
                "authors": [
                    {
                        "first": "Steven",
                        "middle": [],
                        "last": "Bird",
                        "suffix": ""
                    },
                    {
                        "first": "Edward",
                        "middle": [],
                        "last": "Loper",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions",
                "volume": "",
                "issue": "",
                "pages": "214--217",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The nat- ural language toolkit. In Proceedings of the ACL In- teractive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Enriching word vectors with subword information",
                "authors": [
                    {
                        "first": "Piotr",
                        "middle": [],
                        "last": "Bojanowski",
                        "suffix": ""
                    },
                    {
                        "first": "Edouard",
                        "middle": [],
                        "last": "Grave",
                        "suffix": ""
                    },
                    {
                        "first": "Armand",
                        "middle": [],
                        "last": "Joulin",
                        "suffix": ""
                    },
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "5",
                "issue": "",
                "pages": "135--146",
                "other_ids": {
                    "DOI": [
                        "10.1162/tacl_a_00051"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Supervised learning of universal sentence representations from natural language inference data",
                "authors": [
                    {
                        "first": "Alexis",
                        "middle": [],
                        "last": "Conneau",
                        "suffix": ""
                    },
                    {
                        "first": "Douwe",
                        "middle": [],
                        "last": "Kiela",
                        "suffix": ""
                    },
                    {
                        "first": "Holger",
                        "middle": [],
                        "last": "Schwenk",
                        "suffix": ""
                    },
                    {
                        "first": "Lo\u00efc",
                        "middle": [],
                        "last": "Barrault",
                        "suffix": ""
                    },
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Bordes",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "670--680",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph",
                "authors": [
                    {
                        "first": "Zeyu",
                        "middle": [],
                        "last": "Dai",
                        "suffix": ""
                    },
                    {
                        "first": "Ruihong",
                        "middle": [],
                        "last": "Huang",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "141--151",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zeyu Dai and Ruihong Huang. 2018. Improving im- plicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 141-151, New Orleans, Louisiana. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Long short-term memory",
                "authors": [
                    {
                        "first": "Sepp",
                        "middle": [],
                        "last": "Hochreiter",
                        "suffix": ""
                    },
                    {
                        "first": "J\u00fcrgen",
                        "middle": [],
                        "last": "Schmidhuber",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Neural Comput",
                "volume": "9",
                "issue": "8",
                "pages": "1735--1780",
                "other_ids": {
                    "DOI": [
                        "10.1162/neco.1997.9.8.1735"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Combining intra-and multisentential rhetorical parsing for document-level discourse analysis",
                "authors": [
                    {
                        "first": "Shafiq",
                        "middle": [],
                        "last": "Joty",
                        "suffix": ""
                    },
                    {
                        "first": "Giuseppe",
                        "middle": [],
                        "last": "Carenini",
                        "suffix": ""
                    },
                    {
                        "first": "Raymond",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "Yashar",
                        "middle": [],
                        "last": "Mehdad",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "486--496",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra-and multi- sentential rhetorical parsing for document-level dis- course analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics, pages 486-496, Sofia, Bulgaria.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "CODRA: A novel discriminative framework for rhetorical analysis",
                "authors": [
                    {
                        "first": "Shafiq",
                        "middle": [],
                        "last": "Joty",
                        "suffix": ""
                    },
                    {
                        "first": "Giuseppe",
                        "middle": [],
                        "last": "Carenini",
                        "suffix": ""
                    },
                    {
                        "first": "Raymond",
                        "middle": [
                            "T"
                        ],
                        "last": "Ng",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Computational Linguistics",
                "volume": "41",
                "issue": "3",
                "pages": "385--435",
                "other_ids": {
                    "DOI": [
                        "10.1162/COLI_a_00226"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015. CODRA: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385-435.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Adam: A method for stochastic optimization",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Diederik",
                        "suffix": ""
                    },
                    {
                        "first": "Jimmy",
                        "middle": [],
                        "last": "Kingma",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ba",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "3rd International Conference on Learning Representations",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Constituency parsing with a self-attentive encoder",
                "authors": [
                    {
                        "first": "Nikita",
                        "middle": [],
                        "last": "Kitaev",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "2676--2686",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P18-1249"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Associa- tion for Computational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "The stanford corenlp natural language processing toolkit",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Christopher",
                        "suffix": ""
                    },
                    {
                        "first": "Mihai",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Surdeanu",
                        "suffix": ""
                    },
                    {
                        "first": "Jenny",
                        "middle": [
                            "Rose"
                        ],
                        "last": "Bauer",
                        "suffix": ""
                    },
                    {
                        "first": "Steven",
                        "middle": [],
                        "last": "Finkel",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Bethard",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Mc-Closky",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
                "volume": "",
                "issue": "",
                "pages": "55--60",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Building a large annotated corpus of English: The Penn Treebank",
                "authors": [
                    {
                        "first": "Mitchell",
                        "middle": [
                            "P"
                        ],
                        "last": "Marcus",
                        "suffix": ""
                    },
                    {
                        "first": "Beatrice",
                        "middle": [],
                        "last": "Santorini",
                        "suffix": ""
                    },
                    {
                        "first": "Mary",
                        "middle": [
                            "Ann"
                        ],
                        "last": "Marcinkiewicz",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Computational Linguistics",
                "volume": "19",
                "issue": "2",
                "pages": "313--330",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Efficient estimation of word representations in vector space",
                "authors": [
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    },
                    {
                        "first": "Kai",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Corrado",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Dean",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of Workshop at ICLR",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tomas Mikolov, Kai Chen, G.s Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. Proceedings of Workshop at ICLR, 2013.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Distributed representations of words and phrases and their compositionality",
                "authors": [
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    },
                    {
                        "first": "Ilya",
                        "middle": [],
                        "last": "Sutskever",
                        "suffix": ""
                    },
                    {
                        "first": "Kai",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Greg",
                        "middle": [],
                        "last": "Corrado",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Dean",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
                "volume": "2",
                "issue": "",
                "pages": "3111--3119",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Improving implicit discourse relation recognition through feature set optimization",
                "authors": [
                    {
                        "first": "Joonsuk",
                        "middle": [],
                        "last": "Park",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Cardie",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
                "volume": "",
                "issue": "",
                "pages": "108--112",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Joonsuk Park and Claire Cardie. 2012. Improving im- plicit discourse relation recognition through feature set optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 108-112, Seoul, South Korea. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "The penn discourse treebank 2.0",
                "authors": [
                    {
                        "first": "Rashmi",
                        "middle": [],
                        "last": "Prasad",
                        "suffix": ""
                    },
                    {
                        "first": "Nikhil",
                        "middle": [],
                        "last": "Dinesh",
                        "suffix": ""
                    },
                    {
                        "first": "Alan",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Eleni",
                        "middle": [],
                        "last": "Miltsakaki",
                        "suffix": ""
                    },
                    {
                        "first": "Livio",
                        "middle": [],
                        "last": "Robaldo",
                        "suffix": ""
                    },
                    {
                        "first": "Aravind",
                        "middle": [],
                        "last": "Joshi",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)",
                "volume": "",
                "issue": "",
                "pages": "2961--2968",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), pages 2961- 2968. European Language Resources Association (ELRA).",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Realization of discourse relations by other means: Alternative lexicalizations",
                "authors": [
                    {
                        "first": "Rashmi",
                        "middle": [],
                        "last": "Prasad",
                        "suffix": ""
                    },
                    {
                        "first": "Aravind",
                        "middle": [],
                        "last": "Joshi",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2010. Realization of discourse relations by other means: Alternative lexicalizations. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (COLING), Beijing, China.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Exploring substitutability through discourse adverbials and multiple judgments",
                "authors": [
                    {
                        "first": "Hannah",
                        "middle": [],
                        "last": "Rohde",
                        "suffix": ""
                    },
                    {
                        "first": "Anna",
                        "middle": [],
                        "last": "Dickinson",
                        "suffix": ""
                    },
                    {
                        "first": "Nathan",
                        "middle": [],
                        "last": "Schneider",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [],
                        "last": "Clark",
                        "suffix": ""
                    },
                    {
                        "first": "Annie",
                        "middle": [],
                        "last": "Louis",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings, 12th International Conference on Computational Semantics (IWCS 2017)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hannah Rohde, Anna Dickinson, Nathan Schneider, Christopher Clark, Annie Louis, and Bonnie Webber. 2017. Exploring substitutability through discourse adverbials and multiple judgments. In Proceedings, 12th International Conference on Computational Se- mantics (IWCS 2017), Montpellier, France.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Discourse coherence: Concurrent explicit and implicit relations",
                "authors": [
                    {
                        "first": "Hannah",
                        "middle": [],
                        "last": "Rohde",
                        "suffix": ""
                    },
                    {
                        "first": "Alexander",
                        "middle": [],
                        "last": "Johnson",
                        "suffix": ""
                    },
                    {
                        "first": "Nathan",
                        "middle": [],
                        "last": "Schneider",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 56 th Annual Meeting of the ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hannah Rohde, Alexander Johnson, Nathan Schneider, and Bonnie Webber. 2018. Discourse coherence: Concurrent explicit and implicit relations. In Pro- ceedings of the 56 th Annual Meeting of the ACL.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A systematic study of neural discourse models for implicit discourse relation",
                "authors": [
                    {
                        "first": "Attapol",
                        "middle": [],
                        "last": "Rutherford",
                        "suffix": ""
                    },
                    {
                        "first": "Vera",
                        "middle": [],
                        "last": "Demberg",
                        "suffix": ""
                    },
                    {
                        "first": "Nianwen",
                        "middle": [],
                        "last": "Xue",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "281--291",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Attapol Rutherford, Vera Demberg, and Nianwen Xue. 2017. A systematic study of neural discourse mod- els for implicit discourse relation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pages 281-291.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Robust non-explicit neural discourse parser in English and Chinese",
                "authors": [
                    {
                        "first": "Attapol",
                        "middle": [],
                        "last": "Rutherford",
                        "suffix": ""
                    },
                    {
                        "first": "Nianwen",
                        "middle": [],
                        "last": "Xue",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the CoNLL-16 shared task",
                "volume": "",
                "issue": "",
                "pages": "55--59",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Attapol Rutherford and Nianwen Xue. 2016. Robust non-explicit neural discourse parser in English and Chinese. In Proceedings of the CoNLL-16 shared task, pages 55-59, Berlin, Germany.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Examples and specifications that prove a point: Identifying elaborative and argumentative discourse relations",
                "authors": [
                    {
                        "first": "Merel",
                        "middle": [],
                        "last": "Scholman",
                        "suffix": ""
                    },
                    {
                        "first": "Vera",
                        "middle": [],
                        "last": "Demberg",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Dialogue & Discourse",
                "volume": "8",
                "issue": "",
                "pages": "56--83",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Merel Scholman and Vera Demberg. 2017. Exam- ples and specifications that prove a point: Identi- fying elaborative and argumentative discourse rela- tions. Dialogue & Discourse, 8:56-83.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "On the need of cross validation for discourse relation classification",
                "authors": [
                    {
                        "first": "Wei",
                        "middle": [],
                        "last": "Shi",
                        "suffix": ""
                    },
                    {
                        "first": "Vera",
                        "middle": [],
                        "last": "Demberg",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "2",
                "issue": "",
                "pages": "150--156",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wei Shi and Vera Demberg. 2017. On the need of cross validation for discourse relation classification. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 150-156, Valencia, Spain. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "The CoNLL-2015 shared task on shallow discourse parsing",
                "authors": [
                    {
                        "first": "Nianwen",
                        "middle": [],
                        "last": "Xue",
                        "suffix": ""
                    },
                    {
                        "first": "Sameer",
                        "middle": [],
                        "last": "Hwee Tou Ng",
                        "suffix": ""
                    },
                    {
                        "first": "Rashmi",
                        "middle": [],
                        "last": "Pradhan",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [],
                        "last": "Prasad",
                        "suffix": ""
                    },
                    {
                        "first": "Attapol",
                        "middle": [],
                        "last": "Bryant",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Rutherford",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
                "volume": "",
                "issue": "",
                "pages": "1--16",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The CoNLL-2015 shared task on shallow discourse parsing. In Proceedings of the Nine- teenth Conference on Computational Natural Lan- guage Learning -Shared Task, pages 1-16, Beijing, China. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "CoNLL 2016 shared task on multilingual shallow discourse parsing",
                "authors": [
                    {
                        "first": "Nianwen",
                        "middle": [],
                        "last": "Xue",
                        "suffix": ""
                    },
                    {
                        "first": "Sameer",
                        "middle": [],
                        "last": "Hwee Tou Ng",
                        "suffix": ""
                    },
                    {
                        "first": "Attapol",
                        "middle": [],
                        "last": "Pradhan",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Rutherford",
                        "suffix": ""
                    },
                    {
                        "first": "Chuan",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    },
                    {
                        "first": "Hongmin",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the CoNLL-16 shared task",
                "volume": "",
                "issue": "",
                "pages": "1--19",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, At- tapol Rutherford, Bonnie Webber, Chuan Wang, and Hongmin Wang. 2016. CoNLL 2016 shared task on multilingual shallow discourse parsing. In Pro- ceedings of the CoNLL-16 shared task, pages 1- 19, Berlin, Germany. Association for Computational Linguistics.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": ". . . Exxon Corp. built the plant but (Implicit=then) closed it in 1985. [wsj 1748] (COMPARISON.CONCESSION.ARG2-AS-DENIER, TEMPORAL.ASYNCHRONOUS.PRECEDENCE) (6) . . . which [i.e., the line item veto] would enable him to kill individual items in a big spending bill without (Implicit=however) having to kill the entire bill. [wsj 1133] (EXPANSION.MANNER.ARG2-AS-MANNER, COMPARISON.CONCESSION.ARG2-AS-DENIER)"
            },
            "FIGREF1": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "The overall model architecture for implicit sense classification"
            },
            "FIGREF2": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Confusion matrix of the Basic Model, Model 1 and Model 2"
            },
            "FIGREF3": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "MARKET MOVES, these managers don't. ( ( S-HLN ( S ( NP-SBJ ( NN MARKET ) ) ( VP ( VBZ MOVES ) ) ) ( , , ) ( S ( NP-SBJ ( DT these ) ( NNS managers ) ) ( VP ( VBP do ) ( RB n't ) ( VP ( -NONE-*?* ) ) ) ) ( . . ) ) ) [wsj 1825] (10) Oil-tool prices are even edging up. ( ( S ( NP-SBJ ( NN Oil-tool ) ( NNS prices ) ) ( VP ( VBP are ) ( ADVP ( RB even ) ) ( VP ( VBG edging ) ( ADVP-DIR ( RP up ) ) ) ) ( . . ) ) ) [wsj 0725]"
            },
            "FIGREF4": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "With three minutes left on the clock, Mr. Aikman takes the snap, steps back and fires a 21-yard pass -straight into the hands of an Atlanta defensive back. IN CD NNS VBD IN DT NN , NNP NNP VBZ DT NN , NNS RB CC VBZ DT JJ NN : RB IN DT NNS IN DT NNP NN RB . ((S (SBAR (IN With) (S (NP (CD three) (NNS minutes)) (VP (VBD left) (PP (IN on) (NP (DT the) (NN clock)))))) (, ,) (NP (NNP Mr.) (NNP Aikman)) (VP (VP (VBZ takes) (NP (NP (DT the) (NN snap)) (, ,) (NP (NNS steps))) (ADVP (RB back))) (CC and) (VP (VBZ fires) (NP (DT a) (JJ 21-yard) (NN pass)) (: -) (PP (RB straight) (IN into) (NP (NP (DT the) (NNS hands)) (PP (IN of) (NP (DT an) (NNP Atlanta) (NN defensive)))))) (ADVP (RB back))) (. .))) [wsj 1411]"
            },
            "FIGREF5": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "The overall model architecture for implicit sense classification"
            },
            "TABREF2": {
                "type_str": "table",
                "content": "<table><tr><td/><td/><td colspan=\"3\">inter-sentential intra-sentential</td></tr><tr><td colspan=\"2\">Comparison Concession</td><td colspan=\"3\">1355 (8.70%) 136 (2.19%)</td></tr><tr><td/><td colspan=\"2\">Concession+SpeechAct 7</td><td>(0.04%) 3</td><td>(0.05%)</td></tr><tr><td/><td>Contrast</td><td colspan=\"3\">700 (4.50%) 156 (2.51%)</td></tr><tr><td/><td>Similarity</td><td>14</td><td>(0.09%) 14</td><td>(0.23%)</td></tr><tr><td colspan=\"2\">Contingency Cause</td><td colspan=\"3\">4153 (26.67%) 1613 (25.97%)</td></tr><tr><td/><td>Cause+SpeechAct</td><td>21</td><td>(0.13%) 1</td><td>(0.02%)</td></tr><tr><td/><td>Cause+Belief</td><td colspan=\"2\">105 (0.67%) 94</td><td>(1.51%)</td></tr><tr><td/><td>Condition</td><td>1</td><td colspan=\"2\">(0.01%) 198 (3.19%)</td></tr><tr><td/><td>Condition+SpeechAct</td><td>1</td><td>(0.01%) 1</td><td>(0.02%)</td></tr><tr><td/><td>Purpose</td><td>19</td><td colspan=\"2\">(0.12%) 1351 (21.76%)</td></tr><tr><td>Expansion</td><td>Conjunction</td><td colspan=\"3\">3648 (23.43%) 733 (11.80%)</td></tr><tr><td/><td>Disjunction</td><td>9</td><td>(0.06%) 21</td><td>(0.34%)</td></tr><tr><td/><td>Equivalence</td><td colspan=\"2\">286 (1.84%) 48</td><td>(0.77%)</td></tr><tr><td/><td>Exception</td><td>4</td><td>(0.03%) 1</td><td>(0.02%)</td></tr><tr><td/><td>Instantiation</td><td colspan=\"2\">1385 (8.89%) 87</td><td>(1.40%)</td></tr><tr><td/><td>Level-of-detail</td><td colspan=\"3\">2644 (16.98%) 589 (9.48%)</td></tr><tr><td/><td>Manner</td><td>4</td><td colspan=\"2\">(0.03%) 223 (3.59%)</td></tr><tr><td/><td>Substitution</td><td colspan=\"3\">221 (1.42%) 145 (2.33%)</td></tr><tr><td>Temporal</td><td>Asynchronous</td><td colspan=\"3\">647 (4.15%) 608 (9.79%)</td></tr><tr><td/><td>Synchronous</td><td colspan=\"3\">348 (2.23%) 188 (3.03%)</td></tr><tr><td>total</td><td/><td>15572</td><td>6210</td></tr></table>",
                "html": null,
                "num": null,
                "text": "Task 2: Identify the location of implicit relations. To reduce the dependency on the gold standard annotations of where implicit discourse re-"
            },
            "TABREF4": {
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"2\">Main evaluation metric</td><td/><td>Cross</td></tr><tr><td/><td colspan=\"4\">inter-sentential intra-sentential overall validation</td></tr><tr><td>Basic model</td><td>35.791</td><td>47.154</td><td>38.608</td><td>41.463</td></tr><tr><td>Model 1</td><td>34.973</td><td>56.666</td><td>40.222</td><td>43.418</td></tr><tr><td>Model 2</td><td>37.701</td><td>50.410</td><td>40.827</td><td>42.174</td></tr></table>",
                "html": null,
                "num": null,
                "text": "Distribution of linked and stand-alone implicit relations among Level 2 labels and the proportion of each label with respect to the total linked/stand-alone implicit relations"
            },
            "TABREF5": {
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "num": null,
                "text": ""
            }
        }
    }
}