{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T13:27:11.320227Z"
    },
    "title": "Computational Interpretations of Recency for the Choice of Referring Expressions in Discourse",
    "authors": [
        {
            "first": "Fahime",
            "middle": [],
            "last": "Same",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Cologne",
                "location": {}
            },
            "email": "f.same@uni-koeln.de"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "First, we discuss the most common linguistic perspectives on the concept of recency and propose a taxonomy of recency metrics employed in Machine Learning studies for choosing the form of referring expressions in discourse context. We then report on a Multi-Layer Perceptron study and a Sequential Forward Search experiment, followed by Bayes Factor analysis of the outcomes. The results suggest that recency metrics counting paragraphs and sentences contribute to referential choice prediction more than other recency-related metrics. Based on the results of our analysis, we argue that, sensitivity to discourse structure is important for recency metrics used in determining referring expression forms.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "First, we discuss the most common linguistic perspectives on the concept of recency and propose a taxonomy of recency metrics employed in Machine Learning studies for choosing the form of referring expressions in discourse context. We then report on a Multi-Layer Perceptron study and a Sequential Forward Search experiment, followed by Bayes Factor analysis of the outcomes. The results suggest that recency metrics counting paragraphs and sentences contribute to referential choice prediction more than other recency-related metrics. Based on the results of our analysis, we argue that, sensitivity to discourse structure is important for recency metrics used in determining referring expression forms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Speakers use various linguistic forms such as pronouns, proper names, and common nouns, to refer to entities in discourse. A great number of studies have addressed the issue of referring, and the factors that play a role in speakers' choice of the form of referring expressions. These factors include grammatical function (Brennan, 1995) , animacy (Fukumura and van Gompel, 2011) , competition (Arnold and Griffin, 2007) , frequency (Ariel, 1990) and recency (McCoy and Strube, 1999; Ariel, 2001 ), among others. The focus of this article is on recency.",
                "cite_spans": [
                    {
                        "start": 322,
                        "end": 337,
                        "text": "(Brennan, 1995)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 348,
                        "end": 379,
                        "text": "(Fukumura and van Gompel, 2011)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 394,
                        "end": 420,
                        "text": "(Arnold and Griffin, 2007)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 433,
                        "end": 446,
                        "text": "(Ariel, 1990)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 459,
                        "end": 483,
                        "text": "(McCoy and Strube, 1999;",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 484,
                        "end": 495,
                        "text": "Ariel, 2001",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Broadly speaking, we understand recency to be the distance between the current mention of a referent and its antecedent. Therefore, in this work, we employ recency metrics to predict the form of subsequent mentions, and are not interested in the choice of \"first-mention\" expressions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Recency has received much attention in both linguistic and computational studies, but in many cases, the notion of recency itself has been left largely undefined even though, as we shall see, recency can be understood in different ways. This paper has three objectives. The first is to survey different computational \"interpretations\" of the notion of recency. The second goal is to determine which of these computational interpretations is most effective for predicting the form of a referring expression in discourse context. In other words, we will ask, \"what is the best way to operationalize the notion of recency in computational and data-oriented studies?\" And the final objective is to see to which extent the choice of recency metrics should depend on the corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The structure of this paper is as follows: in section 2, we summarize how recency has been used in linguistic studies. In section 3, we provide a brief overview of the notion of recency in Machine Learning (ML) studies, with the purpose of creating a taxonomy of recency metrics discussed in section 4. Sections 5 and 6 report two new studies. The former analyzes single recency metrics, the latter takes their combination into account. Finally, section 7 gives a brief summary and review of the findings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There is a long tradition of work in linguistics considering recency as a factor influencing the salience of a referent. The general idea is that the greater the distance between the two mentions, the greater the chance of using a full noun phrase anaphor (Vonk et al., 1992; Giv\u00f3n, 1992; Arnold, 2010) ; conversely, the shorter the distance between the two mentions, the greater the chance of pronominalization. Some studies have kept the notion of recency or \"distance to the previous mention\" opaque by not defining what long and short distance mean; while others have presented different interpretations of the notion of distance. In this paper, we focus on the three most frequent interpretations that are found in the literature.",
                "cite_spans": [
                    {
                        "start": 256,
                        "end": 275,
                        "text": "(Vonk et al., 1992;",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 276,
                        "end": 288,
                        "text": "Giv\u00f3n, 1992;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 289,
                        "end": 302,
                        "text": "Arnold, 2010)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Different interpretations of the notion of recency/distance",
                "sec_num": "2"
            },
            {
                "text": "In the studies where the main focus is on the pronominalization problem, the notion of distance is often concerned with whether or not the antecedent is present in the same or previous utterance (or clause) . In a corpus study, Hobbs (1978) noticed that in 98% of the cases, the antecedent of a pronoun anaphor is in the previous or in the same sentence. Ariel (1990) used the same sentence metrics in her corpus study, where she focused on the distribution of pronouns, demonstratives and full NPs. She demonstrated that with respect to distance from the antecedent, in more than 80% of cases, pronouns favor short distances, where the antecedent is in the same sentence or only one sentence away. In centering-based studies such as Hitzeman and Poesio (1998) , Poesio et al. (2004) and Henschel et al. (2000) too, long distance antecedents are those which are more than one utterance or one clause away.",
                "cite_spans": [
                    {
                        "start": 195,
                        "end": 206,
                        "text": "(or clause)",
                        "ref_id": null
                    },
                    {
                        "start": 234,
                        "end": 240,
                        "text": "(1978)",
                        "ref_id": null
                    },
                    {
                        "start": 355,
                        "end": 367,
                        "text": "Ariel (1990)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 734,
                        "end": 760,
                        "text": "Hitzeman and Poesio (1998)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 763,
                        "end": 783,
                        "text": "Poesio et al. (2004)",
                        "ref_id": "BIBREF32"
                    },
                    {
                        "start": 788,
                        "end": 810,
                        "text": "Henschel et al. (2000)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Immediate context",
                "sec_num": "2.1"
            },
            {
                "text": "In some other corpus-based studies, a larger span of text was taken into account. In a comprehensive work on topic continuity in discourse, Giv\u00f3n (1983) measured the distance to the previous mention up to 20 clauses back. The work by Giv\u00f3n is one of the first attempts in quantifying the role of distance in discourse. In a computational pronominalization study, McCoy and Strube (1999) hypothesized that \"when the last mention of an item is several sentences back in the text, a definite description is preferred\". For this study which was conducted on a corpus of The New York Times articles, they found out that in long-distance situations (where the antecedent is more than two sentences away), a definite description is almost always used. In a psycholinguistics experiment, Arnold et al. (2009) examined the choice of referring expressions made by high-functioning children and adolescents with autism. Arnold et al. grouped the distance to the antecedent into 4 categories and demonstrated that the participants in their experiment had sensitivity to the discourse context.",
                "cite_spans": [
                    {
                        "start": 140,
                        "end": 152,
                        "text": "Giv\u00f3n (1983)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 363,
                        "end": 386,
                        "text": "McCoy and Strube (1999)",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 780,
                        "end": 800,
                        "text": "Arnold et al. (2009)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Non-local context",
                "sec_num": "2.2"
            },
            {
                "text": "While the distance patterns explained in the previous paragraphs account for a large number of pronominalization cases, according to Fox (1987) , they cannot handle all various types of anaphoric patterns. She showed that pronouns can be used to refer to a referent over long stretches of distance until the goal of the narrative changes (cited in Smith (2003) ). In line with this idea, Ariel (1990) proposed the notion of unity, meaning, the antecedent being in the same frame, segment or paragraph. Vonk et al. (1992) and Tomlin (1987) also emphasized the importance of episode or unit boundaries, mostly realized as paragraph boundaries in written text, as factors contributing to the recency of mention.",
                "cite_spans": [
                    {
                        "start": 133,
                        "end": 143,
                        "text": "Fox (1987)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 348,
                        "end": 360,
                        "text": "Smith (2003)",
                        "ref_id": "BIBREF35"
                    },
                    {
                        "start": 502,
                        "end": 520,
                        "text": "Vonk et al. (1992)",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 525,
                        "end": 538,
                        "text": "Tomlin (1987)",
                        "ref_id": "BIBREF36"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "As explained, there are three different interpretations of recency in the literature. The first two interpretations are concerned with measuring the distance in sentences (or clauses), while the third one goes beyond the sentential level, and focuses on paragraphs. Which of these interpretations does best in algorithms to predict referential choice in discourse contexts?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "3 Recency in ML studies Within Natural Language Generation (Gatt and Krahmer, 2018) , reference production is computationally modelled in an area known as Referring Expression Generation (REG) (Krahmer and van Deemter, 2019; van Deemter, 2016) . REG models have various shapes and forms, with feature-based ML models playing a substantial role. GREC ) was a series of Shared Task Evaluation tasks that is still regarded as a natural starting point when it comes to the generation of referring expressions in context. Different ML algorithms were submitted to these shared tasks, a number of which have exploited recency metrics. Some of the metrics used in these algorithms are pursuant to the interpretations mentioned in section 2. For example, the recency feature in Greenbacker and McCoy (2009) resembles the metric defined in McCoy and Strube (1999) . Another example is a binary feature used by Bohnet (2008) , which captures whether or not the antecedent occurs in the same sentence. This metric is similar to the interpretation discussed above under the heading \"Immediate Context\". Some of the other recency metrics used in these algorithms, however, are not in accordance with the interpretations introduced in section 2. For instance, Bohnet (2008) and Jamison and Mehay (2008) used distance metrics measuring number of words between the two mentions. In a more recent ML study, Kibrik et al. (2016) stated that referential choice belongs to a large group of multifactorial processes. They used 7 different distance-related metrics in their study and concluded that these metrics are essential for successful prediction of referential choice, but there is no indication which metrics are the most relevant ones. Further studies that include recency metrics are Ferreira et al. 2016, Modi et al. (2017) and Saha et al. (2011) , among others.",
                "cite_spans": [
                    {
                        "start": 59,
                        "end": 83,
                        "text": "(Gatt and Krahmer, 2018)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 193,
                        "end": 224,
                        "text": "(Krahmer and van Deemter, 2019;",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 225,
                        "end": 243,
                        "text": "van Deemter, 2016)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 770,
                        "end": 798,
                        "text": "Greenbacker and McCoy (2009)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 831,
                        "end": 854,
                        "text": "McCoy and Strube (1999)",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 901,
                        "end": 914,
                        "text": "Bohnet (2008)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1246,
                        "end": 1259,
                        "text": "Bohnet (2008)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1264,
                        "end": 1288,
                        "text": "Jamison and Mehay (2008)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 1794,
                        "end": 1812,
                        "text": "Modi et al. (2017)",
                        "ref_id": "BIBREF30"
                    },
                    {
                        "start": 1817,
                        "end": 1835,
                        "text": "Saha et al. (2011)",
                        "ref_id": "BIBREF34"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "We saw that the metrics used in the ML studies are based on different units of measurement (e.g. word distance versus sentence distance). Likewise, different strategies are used to encode these metrics. For instance, some distances are measured in natural numbers while others are categorized in a smaller class of broader \"bins\". In the following example taken from the GREC-2.0 corpus , one could say that the distance between the expression \"its\" and its antecedent \"Berlin\" is 21 words (a natural number). Another solution would be, for instance, to follow Ferreira et al. (2016) in grouping the numerical distances into five groups consisting of 0-10 words, 11-20 words, 21-30 words, 31-40 words and more than 40 words. With this approach, the distance between \"its\" and its antecedent falls into the third bin, 21-30 words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "(1) Berlin (1) is (2) the (3) capital (4) city (5) and (6) one (7) of (8) the (9) sixteen (10) federal (11) states (12) of (13) Germany (14) .",
                "cite_spans": [
                    {
                        "start": 136,
                        "end": 140,
                        "text": "(14)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "(15) With (16) a (17) population (18) of (19) 3.4 (20) million (21) in (22) its (23) city (24) limits (25) ,...",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "The question is which of these metrics work best in ML studies. The existing diversity motivated us to collect as many recency metrics as possible from the ML literature and create a taxonomy of recency metrics.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit boundary",
                "sec_num": "2.3"
            },
            {
                "text": "This section begins with subsection 4.1 introducing recency metrics collected from different ML studies. Later, subsection 4.2 presents the two corpora used in our assessments and highlights their main differences. And finally, subsection 4.3 introduces the baseline algorithm and the ML method employed in our assessments. Table 1 presents the metrics measuring the distance from the current expression to its antecedent 1 . As 1 Greenbacker and McCoy defined the recency metric in their study as: \"Referring expressions which were separated mentioned in the previous section, recency metrics vary a great deal. The most important differences between these metrics are:",
                "cite_spans": [
                    {
                        "start": 429,
                        "end": 430,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 324,
                        "end": 331,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "4"
            },
            {
                "text": "I. Antecedent type In most metrics, the antecedent is the nearest previous mention of the same entity. In one of the metrics (metric 14 in Table 1 ), however, instead of the distance to the nearest mention, the distance to the nearest full NP mention is measured.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 139,
                        "end": 146,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "II. Unit of measurement The units in which the distance is measured vary in the recency metrics. The units of measurements used in the metrics outlined in Table 1 include distance in number of:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 155,
                        "end": 162,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 words [metrics 1-3] \u2022 sentences [metrics 4-11] \u2022 NPs [metric 12]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 markables, defined as the textual expressions, between which coreferential relations can be established (Chiarcos and Krasavina, 2005) .",
                "cite_spans": [
                    {
                        "start": 106,
                        "end": 136,
                        "text": "(Chiarcos and Krasavina, 2005)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "[metrics 13-14]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 paragraphs [metric 15]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "III. Type of encoding As shown in Example (1), the major difference between encoding of the metrics is whether the distance is reported as a numeric value or defined bins. Among the metrics presented below, metrics 2, 3, 5, 6, 7 and 10 are categorical, the rest are numeric. Another difference in type of encoding concerns how numeric values are encoded. Of the metrics used in this assessment, metrics 1, 4 and 12-15 are reported as natural numbers (including 0), metric 8 is the natural logarithm of the number of intervening sentences, metric 9 is its exponential variant 2 and metric 11, which will be explained below, is the normalized distance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "Scaled/normalized sentence distance The distance between the mentions ranges from 0 to 19 sentences in MSR and 0 to 146 sentences in WSJ. To overcome this sparsity, we decided to bound from the most recent reference by more than two sentences were marked as long distance references\" (2009, p. 101). We have two different interpretations of this sentence which are presented as metric 5 and metric 6. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "x norm = x i \u2212 x min x max \u2212 x min",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "In this section, we introduced 14 metrics from the ML literature, plus one additional metric we decided to include in the study. The assessment of these metrics will be presented in section 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Taxonomy of recency/distance metrics",
                "sec_num": "4.1"
            },
            {
                "text": "As indicated earlier, we are also interested to find out the extent to which the choice of recency metrics should take the corpus itself into account. Corpora can be different from each other in terms of, for instance, size, genre (e.g. Wikipedia article, newspaper articles and medical reports) and structure of their documents (e.g. length and sentence structure). For this study, we have chosen two corpora which are different from each other in terms of text genre and length-related attributes (which will be referred to as text structure in this article).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpora used in this study",
                "sec_num": "4.2"
            },
            {
                "text": "Considering that the GREC Shared Tasks were among the first systematic studies tackling the referential choice in context, we decided to start our assessment of the metrics with GREC-2.0 (henceforth MSR 3 ), one of the underlying corpora of these Shared Tasks 4 . MSR consists of more than 1500 introductory sections of Wikipedia articles in 5 different classes (people, city, country, river and mountain). The major pitfall of MSR is that only mentions to the main reference of the article are annotated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpora used in this study",
                "sec_num": "4.2"
            },
            {
                "text": "In addition to MSR, we decided to include the Wall Street Journal portion (henceforth WSJ) of the OntoNotes corpus (Hovy et al., 2006; Pradhan et al., 2013) in this study. The genres of the two corpora are different, with the former containing Wikipedia articles, and the latter having newspaper articles. Also, the structure of the documents, such as length of each document, number of sentences and number of paragraphs are radically different across both corpora. The existing differences between the two corpora make it possible to explore whether the choice of recency metrics should depend on the text structure. Table 2 illustrates the major differences between the two corpora. In order to apply the recency metrics to MSR, we conducted tokenization and sentence segmentation using the spaCy python library. The texts of WSJ were already segmented and tokenized.",
                "cite_spans": [
                    {
                        "start": 115,
                        "end": 134,
                        "text": "(Hovy et al., 2006;",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 135,
                        "end": 156,
                        "text": "Pradhan et al., 2013)",
                        "ref_id": "BIBREF33"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 619,
                        "end": 626,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Corpora used in this study",
                "sec_num": "4.2"
            },
            {
                "text": "It is also important to note that four referring expression types, namely common noun, proper name, pronoun and zero anaphor are annotated in MSR. In WSJ, zero cases are not annotated, and only realized expressions are considered. For this reason, we decided to include only realized expressions (namely common nouns, proper names and pronouns) in our study and exclude the covert references. Hence, as mentioned before, the task in this study is to predict whether a target referring expression is a pronoun, a proper name or a common noun. The total number of referring expressions is 9306 in MSR and 21565 in WSJ, of which we placed 70% in a training set and 30% in a test set. As shown in Table 2 , the documents in WSJ are roughly 4 times longer than the documents in MSR. Also, each document has a greater number of sentences and paragraphs. We expect that in the ML studies, the WSJ algorithms overall have a lower accuracy than the MSR algorithms.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 693,
                        "end": 700,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Corpora used in this study",
                "sec_num": "4.2"
            },
            {
                "text": "In order to assess the recency metrics, the first step is to create a baseline algorithm which contains no recency metric. This enables us to compare the performance of the experimental algorithms incorporating recency metrics against the baseline. We could have chosen different features, but we chose grammatical role of the current mention and grammatical role of the previous mention as the features of the baseline system for the following reasons: Using grammatical role is a safe choice, because the same syntactic categories were used in both corpora, so any differences in performance between the two corpora will not be due to differences in the annotations. Furthermore, we wanted to make sure that the features in the baseline algorithm are not confounding with recency metrics. For example, a competition-based feature such as the number of competing discourse entities between the two mentions would be confounding because the more competition there is, the greater the distance between the referent and the antecedent is likely to be. For this reason, we chose an algorithm that did not use anything other than grammatical role.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baseline algorithms and ML method",
                "sec_num": "4.3"
            },
            {
                "text": "In this study, we use Multi-Layer Perceptron (henceforth MLP), a class of feedforward artificial neural networks as our ML approach. The model has two hidden layers with respectively 16 and 8 units. While hidden layers use the rectified linear activation function (ReLU), the output layer uses the softmax activation function. The model will be fit for 50 training epochs, and 50 samples (batch size) are being propagated through the network. It is noteworthy that since MLP cannot handle categorical data, all categorical metrics have been onehot encoded in this study.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baseline algorithms and ML method",
                "sec_num": "4.3"
            },
            {
                "text": "This section firstly reports on the success of the baseline algorithms, and continues with the algorithms incorporating the recency metrics.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Assessing recency metrics using MLP",
                "sec_num": "5"
            },
            {
                "text": "We mentioned in the previous section that the baseline algorithms are made up of two features, the grammatical role of the current mention and the grammatical role of its antecedent. Table 3 shows the accuracy of the two baseline algorithms. MSR WSJ baseline 0.585 0.55 Table 3 : Accuracy of the MSR and WSJ baseline algorithms",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 183,
                        "end": 190,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 270,
                        "end": 277,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Baseline algorithms",
                "sec_num": "5.1"
            },
            {
                "text": "Each experimental algorithm is composed of two baseline features and one recency metric. For instance, model 4 includes grammatical role of the current mention and the antecedent plus metric 4, which is the numerical distance in sentences. Since there are 15 different recency metrics and two different corpora, the total number of experimental algorithms is 30. If, for instance, an experimental algorithm would have 2 recency metrics instead of one, we would not be able to firmly test whether both features contribute to the performance of the algorithm, or only one of them is involved. For this reason, each metric is tested individually, and not in combination with other recency metrics. The overall accuracy of the experimental algorithms incorporating different recency metrics is reported in Table 4 : Accuracy of the experimental algorithms. The first column, Meas(urement) Unit specifies metrics' units of measurement detailed in section 4.1, II.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 802,
                        "end": 809,
                        "text": "Table 4",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Assessing recency metrics",
                "sec_num": "5.2"
            },
            {
                "text": "The reported accuracies are all higher than the baseline accuracy, but it is still unclear whether the recency metrics are strongly informative of the probability of the increase in the accuracy of the algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit of measurement",
                "sec_num": null
            },
            {
                "text": "We conducted Bayes Factor (henceforth BF) analysis using a beta distribution to investigate whether the outcomes of the experimental and the baseline algorithms come from distributions with the same underlying probability parameter, or ones with different underlying parameters. Hence, in the case of our current assessment, BF is used to determine whether or not there is good evidence for saying that the difference in accuracy rates of the models is less or greater than 0.01 (henceforth threshold). If the difference in accuracy is below the threshold, the evidence is in favor of similar distributions; if it is above the threshold, there is good evidence that the outcomes come from different distributions. In case of being from different distributions, we infer that the inclusion of recency metrics leads to an improvement in the performance of experimental algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit of measurement",
                "sec_num": null
            },
            {
                "text": "Additionally, the strength of evidence for each experimental model versus the baseline will be assessed according to the scale of Kass and Raftery (1995) .",
                "cite_spans": [
                    {
                        "start": 130,
                        "end": 153,
                        "text": "Kass and Raftery (1995)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unit of measurement",
                "sec_num": null
            },
            {
                "text": "Not worth more than a bare mention 3 to 20 Positive 20 to 150 Strong >150 Very strong Table 5 : Interpretation of Bayes Factors according to Kass and Raftery (1995, p. 777) For the sake of space, we only report the results suggesting that the outcomes of the experimental and the baseline algorithms come from different distributions.",
                "cite_spans": [
                    {
                        "start": 141,
                        "end": 172,
                        "text": "Kass and Raftery (1995, p. 777)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 86,
                        "end": 93,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "BF Interpretation 1 to 3",
                "sec_num": null
            },
            {
                "text": "Comparing the rate of correct predictions of each experimental model to that of the baseline shows positive evidence that the accuracy of model 15, the one incorporating distance in paragraph as its recency metric, comes from different distribution than the baseline (BF=3.286). The other models were doing better than the baseline too, but there is insufficient evidence to say they are different from the baseline. More research is needed to investigate why other experimental models are not statistically different from the baseline.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "BF analysis of the MSR models",
                "sec_num": "5.2.1"
            },
            {
                "text": "In the case of WSJ, the accuracy rates of 8 models are different from the accuracy of the baseline. Similar to MSR, the outcome of model 15, utilizing the paragraph-based recency metric, comes from distributions with different underlying probabilities than the baseline. Additionally, except the outcome of model 5, there is very strong evidence that the accuracy of all other models (6 models in total) incorporating sentence-based recency metrics are being shifted by more than 0.01 beyond the baseline. This means, 6 out of 7 sentence-based recency metrics have improved the performance of the algorithms over the baseline. The remaining model with a different accuracy than the baseline is model 12, having NP distance as its recency metric. Table 5 , there is very strong evidence that the accuracy rates of all these models are different from the baseline. The column Def presents very briefly the definition of the metrics according to Table 1 . For instance, cat(4) means the categorical distance in 4 bins.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 746,
                        "end": 753,
                        "text": "Table 5",
                        "ref_id": null
                    },
                    {
                        "start": 943,
                        "end": 950,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "BF analysis of the WSJ models",
                "sec_num": "5.2.2"
            },
            {
                "text": "As a next step, we compare the best performing models of each unit of measurement with each other. Since the only difference between the models is in their recency metrics, if there is good evidence that the difference in the accuracy of the models is greater than the threshold, we conclude that this difference is due to the differences in the recency metrics. Table 7 illustrates the best performing algorithms of each unit of measurement.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 363,
                        "end": 370,
                        "text": "Table 7",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "BF analysis of the best performing models",
                "sec_num": "5.2.3"
            },
            {
                "text": "We conducted a one to one comparison between the best performing models of each unit. The evidence suggests that these models are not statistically different from each other. Table 7 : Best performing algorithms of each unit of measurement from each other. In other words, if we only focus on the WSJ corpus, we do not have enough evidence to prefer one model over another, and we can conclude that the best performing models incorporating sentence, paragraph and NP level recency metrics are equally good. But when we did a one to one comparison between these three models and the best performing models of word and markable units, we found out that the accuracy rates of each of these models have been shifted by more than 0.01 beyond the accuracy rates of the word and markable models. This means, the models incorporating paragraph, sentence and NP level metrics are statistically different from the models incorporating word and markable level information.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 175,
                        "end": 182,
                        "text": "Table 7",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "I. MSR models",
                "sec_num": null
            },
            {
                "text": "As discussed in this section, the recency metrics clearly made a bigger improvement in the WSJ models. In the case of MSR, only one model had a distinguishable performance; while in the case of WSJ, 8 models performed statistically better than the baseline. Furthermore, sentence, paragraph and NP-based metrics evidentially improved the performance of the WSJ algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "II. WSJ models",
                "sec_num": null
            },
            {
                "text": "The results reported in this section were based on the assessment of single recency metrics; yet, there is no assessment of the combination of these metrics. In the next section, we report on a feature selection study we conducted to investigate which combinations of recency metrics lead to best results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "II. WSJ models",
                "sec_num": null
            },
            {
                "text": "In order to investigate the extent to which the combination of different recency metrics improves the performance, we run a Sequential Forward Search (SFS) algorithm. The algorithm starts with an empty set and adds features to the model up to the point that no further improvement occurs. For this study, we used the R package mlr (Bischl et al., 2016) with the learner classif.mlp, and 5-fold cross-validation resampling strategy.",
                "cite_spans": [
                    {
                        "start": 331,
                        "end": 352,
                        "text": "(Bischl et al., 2016)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequential Forward Search",
                "sec_num": "6"
            },
            {
                "text": "The result of the MSR experiment shows that the two recency metrics playing the most important roles are metric 15, distance in paragraph, and metric 9, exponential distance in sentences. Retraining the MLP algorithm on the new model, the accuracy is 0.637. The Bayes Factor analysis provides strong evidence that the outcome of this model is statistically different from the baseline (BF = 26.11).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequential Forward Search",
                "sec_num": "6"
            },
            {
                "text": "In the WSJ SFS experiment, metric 15, distance in paragraphs, and metric 8, log distance in sentences, were chosen as the two recency features whose combination produced the best result. The model trained on the combination of these two metrics achieved an accuracy of 0.631. The Bayes Factor analysis finds very strong evidence that the outcomes of the baseline and this model come from different distributions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequential Forward Search",
                "sec_num": "6"
            },
            {
                "text": "What stands out in this experiment is that for both MSR and WSJ, distance in paragraphs is chosen as one of the recency metrics. The other chosen measure is exponential distance for MSR and logarithmic distance for WSJ. This could indicate that the algorithm is sensitive to the encoding of the sentence-based metrics. More experimentation in a more elaborate feature-based study is needed to test this point.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequential Forward Search",
                "sec_num": "6"
            },
            {
                "text": "Our goal was to shed light on different interpretations of recency, and to find out which of these interpretations are most effective for referential choice prediction. A subsidiary goal was to investigate whether the choice of recency metric should take corpus-specific features such as text genre and text structure into consideration.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "The findings of this study should be of interest to theoretical and computational linguists alike, because both groups of researchers have studied the relation between recency and referential choice. In the linguistic tradition, the notion of recency has often been studied without a clear definition being offered (section 2). In the computational tradition, by contrast, researchers have dwelt less on theoretical justification but have had to provide precise definitions, to ensure that their algorithms are able to deal with a broad range of inputs (for example, Kibrik et al. (2016)). Another difference is that in the linguistic tradition, researchers usually think of recency as operating solely at the sentence or paragraph level, whereas in computational work, less conventional metrics, such as measuring the distance in words or NPs, have also been used. We believe that the existence of a wider range of recency metrics in computational feature-based studies has the potential to open new windows onto a better understanding of recency, and can encourage a re-evaluation of recency in the linguistic tradition. What is missing from many computational works is an explanation of why a certain metric or a certain way of encoding has been chosen over another. The findings from this study make the following contributions to the literature:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Creating a taxonomy of recency metrics After providing an overview of the most prevalent interpretations of recency in the linguistic tradition, we scrutinized the feature-based ML studies and provided, for the first time as far as we know, a taxonomy of recency metrics. The importance of this taxonomy lies firstly in the fact that we know of no existing work that classifies and analyzes this notion comprehensively; this work could therefore be a starting point for a deeper investigation of recency.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Secondly, we have shed light on the differences between these metrics. Knowing what the differences are, and where they stem from, could be the first step in dissecting various aspects of this notion and developing new, improved recency metrics.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Assessing a wide range of recency metrics We have assessed individual metrics using the Multilayer Perceptron algorithm, and conducted a Bayes Factor analysis using a beta distribution to investigate whether there is evidence that the models incorporating recency metrics come from different distributions than the baseline algorithms. Additionally, we conducted a Bayes Factor analysis between the best-performing models of each measurement unit to see whether there is enough evidence that the outcomes of these models differ from each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "The evidence reported in Table 6 for the models built on the WSJ corpus suggests that the outcomes of the models incorporating NP, paragraph and sentence metrics have been shifted by more than 0.01 beyond the baseline's outcome. Also, we have strong evidence to believe that these models are statistically different from the models incorporating word and markable distance measures.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 25,
                        "end": 32,
                        "text": "Table 6",
                        "ref_id": "TABREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Additionally, the results of the Sequential Forward Search experiment show that, for both corpora, a combination of the paragraph-based metric and one of the sentence-based metrics leads to the best performance. This finding is important because it provides some direction in choosing recency metrics for feature-based computational studies. Furthermore, the Bayes Factor analysis and SFS combined suggest that \"higher-level\" metrics such as distance in paragraphs and sentences might result in greater changes in the performance of the algorithms than \"lower-level\" metrics based on counting words or markables. Finally, this raises the question of why a measurement such as distance in the number of sentences performs better than a measurement such as distance in the number of words. This is notable because distance in words might be more indicative of the physical distance between the mentions, considering that sentences can vary enormously in length.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Another interesting observation is that some encoding solutions are more successful than others. For instance, the sentential distance in metric 5 is grouped into 2 bins of +/-2 sentences, while in metric 6, the distance is grouped into 4 bins: 0, 1, 2, or more than 2 sentences. While the former metric leads to only a marginal difference in the performance of the algorithms, the latter contributes more to the improvement of the accuracy. These subtle differences in encoding, and the considerable impact they can have, should be the focus of more experimentation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Another major finding was the important role of distance measured in paragraphs. The Bayes Factor analysis showed that there is strong evidence for the differences between the performance of the baseline and the algorithms incorporating this metric. Also, the SFS algorithm selected this metric in both MSR and WSJ as a feature contributing to the improvement of the results. The important role of paragraph information is in line with what we presented in section 2 under the topic \"Unit boundary\". According to Vonk et al. (1992), episode boundaries can decrease the accessibility of a referent, resulting in re-mention with full NPs. This might be why including paragraph distance, which signals whether or not the antecedent is in a different paragraph, makes referential choice prediction simpler for the algorithms. The surprising point is that, despite the major role of paragraph information, the only study from subsection 4.1 that used the paragraph distance metric is Kibrik et al. (2016). The results from the current study could motivate a greater focus on paragraph-based information in feature-based studies.",
                "cite_spans": [
                    {
                        "start": 524,
                        "end": 542,
                        "text": "Vonk et al. (1992)",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 1015,
                        "end": 1035,
                        "text": "Kibrik et al. (2016)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Importance of the choice of corpus Surprisingly, the results of this study showed that recency measures were of greater importance when applied to WSJ than to MSR. In the case of the MSR models, the only metric which in isolation led to a distribution different from the baseline was distance in the number of paragraphs, while in the case of WSJ, 8 different recency metrics led to major differences. One possible reason for this different behavior of the recency metrics could be that, due to the unbalanced distribution of referring expression types (more than 50% pronouns and less than 20% common nouns), MSR is most likely not a suitable corpus for a three-way referential choice task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "It can be seen from the data in Table 2 that, except for the length of the sentences, which is almost equal in both corpora, the other text structure features, such as the number of words, sentences and paragraphs, are very different (WSJ has almost 4 times more words, sentences and paragraphs). One speculation is that length-related features modulate the importance of the recency metrics in the ML models.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 32,
                        "end": 39,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Further research is needed to identify the causes of this difference. However, based on our study, one might conclude that the more complex the discourse structure, the greater the role of recency measures. If this is true, it would be of great importance to inspect the characteristics of the textual source carefully before deciding which features to include in a study, as apparently the choice of recency metric should depend on text genre and structure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "The exponential distance is not reported for WSJ in this study.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "As this corpus is used in the GREC-MSR Shared Tasks, we abbreviate its name to MSR. We decided to exclude GREC-People, the other corpus used in these Shared Tasks, because after the exclusion of the first mention expressions, only 121 instances of common nouns (2.16% of the whole data) were left. In a pilot study, we found that this is not enough data for a three-way referential choice prediction task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Accessing Noun-Phrase Antecedents",
                "authors": [
                    {
                        "first": "Mira",
                        "middle": [],
                        "last": "Ariel",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mira Ariel. 1990. Accessing Noun-Phrase Antecedents. Routledge.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Accessibility theory: An overview. Text representation: Linguistic and psycholinguistic aspects",
                "authors": [
                    {
                        "first": "Mira",
                        "middle": [],
                        "last": "Ariel",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "8",
                "issue": "",
                "pages": "29--87",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mira Ariel. 2001. Accessibility theory: An overview. Text representation: Linguistic and psycholinguistic aspects, 8:29-87.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "How speakers refer: The role of accessibility",
                "authors": [
                    {
                        "first": "Jennifer",
                        "middle": [
                            "E"
                        ],
                        "last": "Arnold",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Language and Linguistics Compass",
                "volume": "4",
                "issue": "4",
                "pages": "187--203",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jennifer E Arnold. 2010. How speakers refer: The role of accessibility. Language and Linguistics Compass, 4(4):187-203.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Reference production in young speakers with and without autism: Effects of discourse status and processing constraints",
                "authors": [
                    {
                        "first": "Jennifer",
                        "middle": [
                            "E"
                        ],
                        "last": "Arnold",
                        "suffix": ""
                    },
                    {
                        "first": "Loisa",
                        "middle": [],
                        "last": "Bennetto",
                        "suffix": ""
                    },
                    {
                        "first": "Joshua",
                        "middle": [
                            "J"
                        ],
                        "last": "Diehl",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Cognition",
                "volume": "110",
                "issue": "2",
                "pages": "131--146",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jennifer E Arnold, Loisa Bennetto, and Joshua J Diehl. 2009. Reference production in young speakers with and without autism: Effects of discourse status and processing constraints. Cognition, 110(2):131-146.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "The effect of additional characters on choice of referring expression: Everyone counts",
                "authors": [
                    {
                        "first": "Jennifer",
                        "middle": [
                            "E"
                        ],
                        "last": "Arnold",
                        "suffix": ""
                    },
                    {
                        "first": "Zenzi",
                        "middle": [
                            "M"
                        ],
                        "last": "Griffin",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Journal of memory and language",
                "volume": "56",
                "issue": "4",
                "pages": "521--536",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jennifer E Arnold and Zenzi M Griffin. 2007. The ef- fect of additional characters on choice of referring expression: Everyone counts. Journal of memory and language, 56(4):521-536.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "The GREC challenges 2010: overview and evaluation results",
                "authors": [
                    {
                        "first": "Anja",
                        "middle": [],
                        "last": "Belz",
                        "suffix": ""
                    },
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Kow",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 6th international natural language generation conference",
                "volume": "",
                "issue": "",
                "pages": "219--229",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anja Belz and Eric Kow. 2010. The GREC challenges 2010: overview and evaluation results. In Proceed- ings of the 6th international natural language gen- eration conference, pages 219-229. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Generating referring expressions in context: The task evaluation challenges",
                "authors": [
                    {
                        "first": "Anja",
                        "middle": [],
                        "last": "Belz",
                        "suffix": ""
                    },
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Kow",
                        "suffix": ""
                    },
                    {
                        "first": "Jette",
                        "middle": [],
                        "last": "Viethen",
                        "suffix": ""
                    },
                    {
                        "first": "Albert",
                        "middle": [],
                        "last": "Gatt",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Empirical methods in natural language generation",
                "volume": "",
                "issue": "",
                "pages": "294--327",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anja Belz, Eric Kow, Jette Viethen, and Albert Gatt. 2010. Generating referring expressions in context: The task evaluation challenges. In Empirical meth- ods in natural language generation, pages 294-327. Springer.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "mlr: Machine Learning in R",
                "authors": [
                    {
                        "first": "Bernd",
                        "middle": [],
                        "last": "Bischl",
                        "suffix": ""
                    },
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Lang",
                        "suffix": ""
                    },
                    {
                        "first": "Lars",
                        "middle": [],
                        "last": "Kotthoff",
                        "suffix": ""
                    },
                    {
                        "first": "Julia",
                        "middle": [],
                        "last": "Schiffner",
                        "suffix": ""
                    },
                    {
                        "first": "Jakob",
                        "middle": [],
                        "last": "Richter",
                        "suffix": ""
                    },
                    {
                        "first": "Erich",
                        "middle": [],
                        "last": "Studerus",
                        "suffix": ""
                    },
                    {
                        "first": "Giuseppe",
                        "middle": [],
                        "last": "Casalicchio",
                        "suffix": ""
                    },
                    {
                        "first": "Zachary",
                        "middle": [
                            "M"
                        ],
                        "last": "Jones",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "The Journal of Machine Learning Research",
                "volume": "17",
                "issue": "1",
                "pages": "5938--5942",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bernd Bischl, Michel Lang, Lars Kotthoff, Julia Schiffner, Jakob Richter, Erich Studerus, Giuseppe Casalicchio, and Zachary M Jones. 2016. mlr: Ma- chine Learning in R. The Journal of Machine Learn- ing Research, 17(1):5938-5942.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "IS-G: The comparison of different learning techniques for the selection of the main subject references",
                "authors": [
                    {
                        "first": "Bernd",
                        "middle": [],
                        "last": "Bohnet",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Fifth International Natural Language Generation Conference",
                "volume": "",
                "issue": "",
                "pages": "192--193",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bernd Bohnet. 2008. IS-G: The comparison of differ- ent learning techniques for the selection of the main subject references. In Proceedings of the Fifth Inter- national Natural Language Generation Conference, pages 192-193. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Centering attention in discourse. Language and Cognitive processes",
                "authors": [
                    {
                        "first": "Susan",
                        "middle": [
                            "E"
                        ],
                        "last": "Brennan",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "",
                "volume": "10",
                "issue": "",
                "pages": "137--167",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Susan E Brennan. 1995. Centering attention in discourse. Language and Cognitive processes, 10(2):137-167.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Annotation guidelines. pocos-potsdam coreference scheme",
                "authors": [
                    {
                        "first": "Christian",
                        "middle": [],
                        "last": "Chiarcos",
                        "suffix": ""
                    },
                    {
                        "first": "Olga",
                        "middle": [],
                        "last": "Krasavina",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christian Chiarcos and Olga Krasavina. 2005. Annota- tion guidelines. pocos-potsdam coreference scheme. Unpublished manuscript.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Computational models of referring: a study in cognitive science",
                "authors": [
                    {
                        "first": "Kees",
                        "middle": [],
                        "last": "van Deemter",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kees van Deemter. 2016. Computational models of re- ferring: a study in cognitive science. MIT Press.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Towards more variation in text generation: Developing and evaluating variation models for choice of referential form",
                "authors": [
                    {
                        "first": "Thiago",
                        "middle": [
                            "Castro"
                        ],
                        "last": "Ferreira",
                        "suffix": ""
                    },
                    {
                        "first": "Emiel",
                        "middle": [],
                        "last": "Krahmer",
                        "suffix": ""
                    },
                    {
                        "first": "Sander",
                        "middle": [],
                        "last": "Wubben",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "568--577",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thiago Castro Ferreira, Emiel Krahmer, and Sander Wubben. 2016. Towards more variation in text gen- eration: Developing and evaluating variation models for choice of referential form. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 568-577.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Discourse Structure and Anaphora: Written and Conversational English. Cambridge Studies in Linguistics",
                "authors": [
                    {
                        "first": "Barbara",
                        "middle": [
                            "A"
                        ],
                        "last": "Fox",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Barbara A. Fox. 1987. Discourse Structure and Anaphora: Written and Conversational English. Cambridge Studies in Linguistics. Cambridge Uni- versity Press.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "The effect of animacy on the choice of referring expression",
                "authors": [
                    {
                        "first": "Kumiko",
                        "middle": [],
                        "last": "Fukumura",
                        "suffix": ""
                    },
                    {
                        "first": "Roger",
                        "middle": [
                            "PG"
                        ],
                        "last": "van Gompel",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Language and cognitive processes",
                "volume": "26",
                "issue": "10",
                "pages": "1472--1504",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kumiko Fukumura and Roger PG van Gompel. 2011. The effect of animacy on the choice of referring expression. Language and cognitive processes, 26(10):1472-1504.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation",
                "authors": [
                    {
                        "first": "Albert",
                        "middle": [],
                        "last": "Gatt",
                        "suffix": ""
                    },
                    {
                        "first": "Emiel",
                        "middle": [],
                        "last": "Krahmer",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Journal of Artificial Intelligence Research",
                "volume": "61",
                "issue": "",
                "pages": "65--170",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artifi- cial Intelligence Research, 61:65-170.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Topic continuity in discourse: An introduction. Topic continuity in discourse: A quantitative cross-language study",
                "authors": [
                    {
                        "first": "Talmy",
                        "middle": [],
                        "last": "Giv\u00f3n",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "3",
                "issue": "",
                "pages": "1--42",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Talmy Giv\u00f3n. 1983. Topic continuity in discourse: An introduction. Topic continuity in discourse: A quan- titative cross-language study, 3:1-42.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "The grammar of referential coherence as mental processing instructions. Linguistics",
                "authors": [
                    {
                        "first": "Talmy",
                        "middle": [],
                        "last": "Giv\u00f3n",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Talmy Giv\u00f3n. 1992. The grammar of referential coher- ence as mental processing instructions. Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Udel: generating referring expressions guided by psycholinguistic findings",
                "authors": [
                    {
                        "first": "Charles",
                        "middle": [],
                        "last": "Greenbacker",
                        "suffix": ""
                    },
                    {
                        "first": "Kathleen",
                        "middle": [],
                        "last": "McCoy",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the 2009 Workshop on Language Generation and Summarisation",
                "volume": "",
                "issue": "",
                "pages": "101--102",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Charles Greenbacker and Kathleen McCoy. 2009. Udel: generating referring expressions guided by psycholinguistic findings. In Proceedings of the 2009 Workshop on Language Generation and Sum- marisation, pages 101-102. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Cnts: Memory-based learning of generating repeated references",
                "authors": [
                    {
                        "first": "Iris",
                        "middle": [],
                        "last": "Hendrickx",
                        "suffix": ""
                    },
                    {
                        "first": "Walter",
                        "middle": [],
                        "last": "Daelemans",
                        "suffix": ""
                    },
                    {
                        "first": "Kim",
                        "middle": [],
                        "last": "Luyckx",
                        "suffix": ""
                    },
                    {
                        "first": "Roser",
                        "middle": [],
                        "last": "Morante",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [],
                        "last": "Van Asch",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Fifth International Natural Language Generation Conference",
                "volume": "",
                "issue": "",
                "pages": "194--195",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Iris Hendrickx, Walter Daelemans, Kim Luyckx, Roser Morante, and Vincent Van Asch. 2008. Cnts: Memory-based learning of generating repeated refer- ences. In Proceedings of the Fifth International Nat- ural Language Generation Conference, pages 194- 195. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Pronominalization revisited",
                "authors": [
                    {
                        "first": "Renate",
                        "middle": [],
                        "last": "Henschel",
                        "suffix": ""
                    },
                    {
                        "first": "Hua",
                        "middle": [],
                        "last": "Cheng",
                        "suffix": ""
                    },
                    {
                        "first": "Massimo",
                        "middle": [],
                        "last": "Poesio",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 18th conference on Computational linguistics",
                "volume": "1",
                "issue": "",
                "pages": "306--312",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Renate Henschel, Hua Cheng, and Massimo Poesio. 2000. Pronominalization revisited. In Proceedings of the 18th conference on Computational linguistics- Volume 1, pages 306-312. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Long distance pronominalisation and global focus",
                "authors": [
                    {
                        "first": "Janet",
                        "middle": [],
                        "last": "Hitzeman",
                        "suffix": ""
                    },
                    {
                        "first": "Massimo",
                        "middle": [],
                        "last": "Poesio",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "The 17th International Conference on Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Janet Hitzeman and Massimo Poesio. 1998. Long dis- tance pronominalisation and global focus. In COL- ING 1998 Volume 1: The 17th International Confer- ence on Computational Linguistics.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Resolving pronoun references",
                "authors": [
                    {
                        "first": "Jerry",
                        "middle": [
                            "R"
                        ],
                        "last": "Hobbs",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "Lingua",
                "volume": "44",
                "issue": "4",
                "pages": "311--338",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jerry R Hobbs. 1978. Resolving pronoun references. Lingua, 44(4):311-338.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Ontonotes: the 90% solution",
                "authors": [
                    {
                        "first": "Eduard",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    },
                    {
                        "first": "Mitchell",
                        "middle": [],
                        "last": "Marcus",
                        "suffix": ""
                    },
                    {
                        "first": "Martha",
                        "middle": [],
                        "last": "Palmer",
                        "suffix": ""
                    },
                    {
                        "first": "Lance",
                        "middle": [],
                        "last": "Ramshaw",
                        "suffix": ""
                    },
                    {
                        "first": "Ralph",
                        "middle": [],
                        "last": "Weischedel",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the human language technology conference of the NAACL",
                "volume": "",
                "issue": "",
                "pages": "57--60",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human lan- guage technology conference of the NAACL, Com- panion Volume: Short Papers, pages 57-60.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Using discourse features for referring expression generation",
                "authors": [
                    {
                        "first": "Emily",
                        "middle": [],
                        "last": "Jamison",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 5th Meeting of the Midwest Computational Linguistics Colloquium (MCLC)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Emily Jamison. 2008. Using discourse features for re- ferring expression generation. In Proceedings of the 5th Meeting of the Midwest Computational Linguis- tics Colloquium (MCLC).",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Osu-2: Generating referring expressions with a maximum entropy classifier",
                "authors": [
                    {
                        "first": "Emily",
                        "middle": [],
                        "last": "Jamison",
                        "suffix": ""
                    },
                    {
                        "first": "Dennis",
                        "middle": [],
                        "last": "Mehay",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Fifth International Natural Language Generation Conference",
                "volume": "",
                "issue": "",
                "pages": "196--197",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Emily Jamison and Dennis Mehay. 2008. Osu-2: Gen- erating referring expressions with a maximum en- tropy classifier. In Proceedings of the Fifth Inter- national Natural Language Generation Conference, pages 196-197. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Bayes factors",
                "authors": [
                    {
                        "first": "Robert",
                        "middle": [
                            "E"
                        ],
                        "last": "Kass",
                        "suffix": ""
                    },
                    {
                        "first": "Adrian",
                        "middle": [
                            "E"
                        ],
                        "last": "Raftery",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Journal of the american statistical association",
                "volume": "90",
                "issue": "430",
                "pages": "773--795",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Robert E Kass and Adrian E Raftery. 1995. Bayes fac- tors. Journal of the american statistical association, 90(430):773-795.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Referential choice: Predictability and its limits",
                "authors": [
                    {
                        "first": "Andrej",
                        "middle": [
                            "A"
                        ],
                        "last": "Kibrik",
                        "suffix": ""
                    },
                    {
                        "first": "Mariya",
                        "middle": [
                            "V"
                        ],
                        "last": "Khudyakova",
                        "suffix": ""
                    },
                    {
                        "first": "Grigory",
                        "middle": [
                            "B"
                        ],
                        "last": "Dobrov",
                        "suffix": ""
                    },
                    {
                        "first": "Anastasia",
                        "middle": [],
                        "last": "Linnik",
                        "suffix": ""
                    },
                    {
                        "first": "Dmitrij",
                        "middle": [
                            "A"
                        ],
                        "last": "Zalmanov",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Frontiers in psychology",
                "volume": "7",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrej A Kibrik, Mariya V Khudyakova, Grigory B Dobrov, Anastasia Linnik, and Dmitrij A Zalmanov. 2016. Referential choice: Predictability and its lim- its. Frontiers in psychology, 7(1429).",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Computational Generation of Referring Expressions: An Updated Survey",
                "authors": [
                    {
                        "first": "Emiel",
                        "middle": [],
                        "last": "Krahmer",
                        "suffix": ""
                    },
                    {
                        "first": "Kees",
                        "middle": [],
                        "last": "van Deemter",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Emiel Krahmer and Kees van Deemter. 2019. Com- putational Generation of Referring Expressions: An Updated Survey. Oxford University Press.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "Generating anaphoric expressions: pronoun or definite description",
                "authors": [
                    {
                        "first": "Kathleen",
                        "middle": [
                            "F"
                        ],
                        "last": "McCoy",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Strube",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "The Relation of Discourse/Dialogue Structure and Reference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kathleen F McCoy and Michael Strube. 1999. Gen- erating anaphoric expressions: pronoun or definite description? In The Relation of Discourse/Dialogue Structure and Reference.",
                "links": null
            },
            "BIBREF30": {
                "ref_id": "b30",
                "title": "Modeling semantic expectation: Using script knowledge for referent prediction",
                "authors": [
                    {
                        "first": "Ashutosh",
                        "middle": [],
                        "last": "Modi",
                        "suffix": ""
                    },
                    {
                        "first": "Ivan",
                        "middle": [],
                        "last": "Titov",
                        "suffix": ""
                    },
                    {
                        "first": "Vera",
                        "middle": [],
                        "last": "Demberg",
                        "suffix": ""
                    },
                    {
                        "first": "Asad",
                        "middle": [],
                        "last": "Sayeed",
                        "suffix": ""
                    },
                    {
                        "first": "Manfred",
                        "middle": [],
                        "last": "Pinkal",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "5",
                "issue": "",
                "pages": "31--44",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ashutosh Modi, Ivan Titov, Vera Demberg, Asad Say- eed, and Manfred Pinkal. 2017. Modeling seman- tic expectation: Using script knowledge for referent prediction. Transactions of the Association for Com- putational Linguistics, 5:31-44.",
                "links": null
            },
            "BIBREF31": {
                "ref_id": "b31",
                "title": "WLV: A confidence-based machine learning method for the GREC-NEG'09 task",
                "authors": [
                    {
                        "first": "Constantin",
                        "middle": [],
                        "last": "Or\u0103san",
                        "suffix": ""
                    },
                    {
                        "first": "Iustin",
                        "middle": [],
                        "last": "Dornescu",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the 2009 Workshop on Language Generation and Summarisation",
                "volume": "",
                "issue": "",
                "pages": "107--108",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Constantin Or\u0103san and Iustin Dornescu. 2009. WLV: A confidence-based machine learning method for the GREC-NEG'09 task. In Proceedings of the 2009 Workshop on Language Generation and Summarisa- tion (UCNLG+Sum 2009), pages 107-108. Associa- tion for Computational Linguistics.",
                "links": null
            },
            "BIBREF32": {
                "ref_id": "b32",
                "title": "Centering: A parametric theory and its instantiations",
                "authors": [
                    {
                        "first": "Massimo",
                        "middle": [],
                        "last": "Poesio",
                        "suffix": ""
                    },
                    {
                        "first": "Rosemary",
                        "middle": [],
                        "last": "Stevenson",
                        "suffix": ""
                    },
                    {
                        "first": "Barbara",
                        "middle": [
                            "Di"
                        ],
                        "last": "Eugenio",
                        "suffix": ""
                    },
                    {
                        "first": "Janet",
                        "middle": [],
                        "last": "Hitzeman",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Computational linguistics",
                "volume": "30",
                "issue": "3",
                "pages": "309--363",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Massimo Poesio, Rosemary Stevenson, Barbara Di Eu- genio, and Janet Hitzeman. 2004. Centering: A para- metric theory and its instantiations. Computational linguistics, 30(3):309-363.",
                "links": null
            },
            "BIBREF33": {
                "ref_id": "b33",
                "title": "Towards robust linguistic analysis using ontonotes",
                "authors": [
                    {
                        "first": "Sameer",
                        "middle": [],
                        "last": "Pradhan",
                        "suffix": ""
                    },
                    {
                        "first": "Alessandro",
                        "middle": [],
                        "last": "Moschitti",
                        "suffix": ""
                    },
                    {
                        "first": "Nianwen",
                        "middle": [],
                        "last": "Xue",
                        "suffix": ""
                    },
                    {
                        "first": "Hwee Tou",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "Anders",
                        "middle": [],
                        "last": "Bj\u00f6rkelund",
                        "suffix": ""
                    },
                    {
                        "first": "Olga",
                        "middle": [],
                        "last": "Uryupina",
                        "suffix": ""
                    },
                    {
                        "first": "Yuchen",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Zhi",
                        "middle": [],
                        "last": "Zhong",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
                "volume": "",
                "issue": "",
                "pages": "143--152",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152.",
                "links": null
            },
            "BIBREF34": {
                "ref_id": "b34",
                "title": "Single and multi-objective optimization for feature selection in anaphora resolution",
                "authors": [
                    {
                        "first": "Sriparna",
                        "middle": [],
                        "last": "Saha",
                        "suffix": ""
                    },
                    {
                        "first": "Asif",
                        "middle": [],
                        "last": "Ekbal",
                        "suffix": ""
                    },
                    {
                        "first": "Olga",
                        "middle": [],
                        "last": "Uryupina",
                        "suffix": ""
                    },
                    {
                        "first": "Massimo",
                        "middle": [],
                        "last": "Poesio",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "93--101",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sriparna Saha, Asif Ekbal, Olga Uryupina, and Mas- simo Poesio. 2011. Single and multi-objective opti- mization for feature selection in anaphora resolution. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 93-101.",
                "links": null
            },
            "BIBREF35": {
                "ref_id": "b35",
                "title": "Referring expressions in discourse, Cambridge Studies in Linguistics",
                "authors": [
                    {
                        "first": "Carlota",
                        "middle": [
                            "S"
                        ],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "123--152",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carlota S. Smith. 2003. Referring expressions in discourse, Cambridge Studies in Linguistics, page 123-152. Cambridge University Press.",
                "links": null
            },
            "BIBREF36": {
                "ref_id": "b36",
                "title": "Coherence and grounding in discourse: outcome of a symposium",
                "authors": [
                    {
                        "first": "Russell",
                        "middle": [
                            "S"
                        ],
                        "last": "Tomlin",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "John Benjamins Publishing",
                "volume": "11",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Russell S Tomlin. 1987. Coherence and grounding in discourse: outcome of a symposium, Eugene, Ore- gon, June 1984, volume 11. John Benjamins Pub- lishing.",
                "links": null
            },
            "BIBREF37": {
                "ref_id": "b37",
                "title": "The use of referential expressions in structuring discourse. Language and cognitive processes",
                "authors": [
                    {
                        "first": "Wietske",
                        "middle": [],
                        "last": "Vonk",
                        "suffix": ""
                    },
                    {
                        "first": "Lettica",
                        "middle": [
                            "G",
                            "M",
                            "M"
                        ],
                        "last": "Hustinx",
                        "suffix": ""
                    },
                    {
                        "first": "Wim",
                        "middle": [
                            "H",
                            "G"
                        ],
                        "last": "Simons",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "",
                "volume": "7",
                "issue": "",
                "pages": "301--333",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wietske Vonk, Lettica GMM Hustinx, and Wim HG Si- mons. 1992. The use of referential expressions in structuring discourse. Language and cognitive pro- cesses, 7(3-4):301-333.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "type_str": "figure",
                "num": null,
                "text": "defined 7 different implementations of the notion of recency taking different units of measurement into account; while Saha et al. (2011) employed various implementations of sentence-related metrics.",
                "uris": null
            },
            "TABREF1": {
                "html": null,
                "num": null,
                "text": "List of metrics collected from different ML studies",
                "content": "<table><tr><td>the values between two numbers [0,1], using the</td></tr><tr><td>following formula:</td></tr></table>",
                "type_str": "table"
            },
            "TABREF3": {
                "html": null,
                "num": null,
                "text": "Comparison of the MSR and WSJ corpora in terms of length-related features and number of different types of referring expressions. Mean n of chains, meaning mean number of different annotated referents in a document, is not reported for MSR because only one chain per document is annotated.",
                "content": "<table/>",
                "type_str": "table"
            },
            "TABREF4": {
                "html": null,
                "num": null,
                "text": "",
                "content": "<table><tr><td>Meas Unit</td><td>Name</td><td>MSR</td><td>WSJ</td></tr><tr><td rowspan=\"6\">Word</td><td>model 1</td><td>0.60</td><td>0.576</td></tr><tr><td>model 2</td><td>0.594</td><td>0.551</td></tr><tr><td>model 3</td><td>0.592</td><td>0.572</td></tr><tr><td>model 4</td><td>0.607</td><td>0.62</td></tr><tr><td>model 5</td><td>0.588</td><td>0.582</td></tr><tr><td>model 6</td><td>0.608</td><td>0.622</td></tr><tr><td rowspan=\"5\">Sentence</td><td>model 7</td><td>0.602</td><td>0.622</td></tr><tr><td>model 8</td><td>0.607</td><td>0.611</td></tr><tr><td>model 9</td><td>0.609</td><td>-</td></tr><tr><td>model 10</td><td>0.589</td><td>0.597</td></tr><tr><td>model 11</td><td>0.602</td><td>0.604</td></tr><tr><td>NP</td><td>model 12</td><td>0.59</td><td>0.623</td></tr><tr><td rowspan=\"2\">Markable</td><td>model 13</td><td>0.594</td><td>0.561</td></tr><tr><td>model 14</td><td>-</td><td>0.577</td></tr><tr><td>Paragraph</td><td>model 15</td><td>0.625</td><td>0.616</td></tr></table>",
                "type_str": "table"
            },
            "TABREF6": {
                "html": null,
                "num": null,
                "text": "Bayes Factor analysis giving the ratio of probabilities that the underlying accuracy rates are within 1% of each other or not. According to the scale of Kass and Raftery (1995) presented in",
                "content": "<table/>",
                "type_str": "table"
            }
        }
    }
}