{
    "paper_id": "L16-1019",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T12:08:28.885560Z"
    },
    "title": "PentoRef: A Corpus of Spoken References in Task-oriented Dialogues",
    "authors": [
        {
            "first": "Sina",
            "middle": [],
            "last": "Zarrie\u00df",
            "suffix": "",
            "affiliation": {
                "laboratory": "Dialogue Systems Group",
                "institution": "Bielefeld University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Julian",
            "middle": [],
            "last": "Hough",
            "suffix": "",
            "affiliation": {
                "laboratory": "Dialogue Systems Group",
                "institution": "Bielefeld University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Casey",
            "middle": [],
            "last": "Kennington",
            "suffix": "",
            "affiliation": {
                "laboratory": "Dialogue Systems Group",
                "institution": "Bielefeld University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Ramesh",
            "middle": [],
            "last": "Manuvinakurike",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "USC Institute for Creative Technologies",
                "location": {
                    "settlement": "Playa Vista",
                    "region": "CA"
                }
            },
            "email": ""
        },
        {
            "first": "David",
            "middle": [],
            "last": "Devault",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "USC Institute for Creative Technologies",
                "location": {
                    "settlement": "Playa Vista",
                    "region": "CA"
                }
            },
            "email": ""
        },
        {
            "first": "Raquel",
            "middle": [],
            "last": "Fern\u00e1ndez",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Amsterdam",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "David",
            "middle": [],
            "last": "Schlangen",
            "suffix": "",
            "affiliation": {
                "laboratory": "Dialogue Systems Group",
                "institution": "Bielefeld University",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.",
    "pdf_parse": {
        "paper_id": "L16-1019",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "We present PentoRef, a corpus of task-oriented spoken dialogues recorded in a puzzle-playing domain where players have to manipulate and communicate about Pentomino pieces. 1 PentoRef presents a rich resource for investigating human conversational strategies for referring to objects, on different levels of linguistic realization (including speech and timing/turn-taking) and in different yet consistently represented interactive and visual contexts. In particular, PentoRef is useful for developing automatic systems for, and studying the human mechanisms for, two concrete tasks, namely reference resolution (RR) and referring expression generation (REG). The corpus is a meta-collection that bundles up a range of experimental data collected over recent years in the Dialogue Systems Group, first at Potsdam University and then Bielefeld University, and by collaborators. The individual sub-corpora have been used for empirical studies of conversational behaviour in spoken language interaction as well as work on building statistical reference resolution systems in situated environments, in German and English (Fern\u00e1ndez et al., 2006; Schlangen and Fern\u00e1ndez, 2007; Schlangen et al., 2009; Heintze et al., 2010; Kennington et al., 2013; Kennington and Schlangen, 2015) . The common property of the experiments in this collection is that participants have to produce spoken referring expressions to puzzle pieces in a game, normally to instruct another player to carry out a certain move on the Pentomino game board. At the same time, some important parameters of the respective experimental settings were manipulated, such as the way communication was mediated (speech channel and/or visual channel), and the presentation of the scene (virtual or real-world). The original versions of the sub-corpora could not be directly exploited for systematic studies of referring expressions across these settings, due to inconsistent conventions used for segmenting, transcribing and annotating the audio recordings. More- 1 Pentomino is a puzzle game with pieces based on the 12 different shapes that can be constructed from arranging 5 squares next to each other. over, in each experiment, the visual scenes and visual attributes of pieces in a scene were represented in different ways (e.g. either as sets of logical properties or as low-level features from machine vision) such that additional annotation and standardization is needed to exploit the data as an actual corpus of spoken references. This paper presents the upcoming inaugural release of Pen-toRef, a unification of these resources that contains highquality transcriptions of spoken utterances, consistent representations of visual scenes, mark-up of referring expressions and mappings between referring expressions and pieces present in a visual scene. In addition to a consistently structured resource of the raw and derived data, we also provide a light-weight relational database that can be easily processed and queried across the different experimental settings in PentoRef.",
                "cite_spans": [
                    {
                        "start": 173,
                        "end": 174,
                        "text": "1",
                        "ref_id": null
                    },
                    {
                        "start": 1116,
                        "end": 1140,
                        "text": "(Fern\u00e1ndez et al., 2006;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1141,
                        "end": 1171,
                        "text": "Schlangen and Fern\u00e1ndez, 2007;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 1172,
                        "end": 1195,
                        "text": "Schlangen et al., 2009;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 1196,
                        "end": 1217,
                        "text": "Heintze et al., 2010;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1218,
                        "end": 1242,
                        "text": "Kennington et al., 2013;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1243,
                        "end": 1274,
                        "text": "Kennington and Schlangen, 2015)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 2019,
                        "end": 2020,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Compared to other resources used in dialogue research, PentoRef follows a tradition perhaps best exemplified by the HCRC Map Task Corpus (Anderson et al., 1991; MacMahon et al., 2006) in that it combines the naturalness of unscripted conversation with the advantages of taskoriented dialogue, such as careful control over aspects of the linguistic and extralinguistic context. Recent comparable data collection efforts are relatively rare, but see (Tokunaga et al., 2012; Gatt and Paggio, 2014) . Related studies in REG research showed that the linguistic phenomena found in the elicited referring expressions vary widely with the modality, task, and audience, cf. (Mitchell et al., 2010; Koolen and Krahmer, 2010; Clarke et al., 2013) . Inspired by a recently increasing interest in image description and labelling tasks, data sets of real-world photographs (paired with references to specific entities in the image) have also been created for REG (Kazemzadeh et al., 2014; Gkatzia et al., 2015) . Real-world images pose interesting challenges for REG, as the set of visual attributes and, consequently, the distractor objects (objects present in the scene which are not the target of a referring expression) cannot be directly controlled.",
                "cite_spans": [
                    {
                        "start": 137,
                        "end": 160,
                        "text": "(Anderson et al., 1991;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 161,
                        "end": 183,
                        "text": "MacMahon et al., 2006)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 448,
                        "end": 471,
                        "text": "(Tokunaga et al., 2012;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 472,
                        "end": 494,
                        "text": "Gatt and Paggio, 2014)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 665,
                        "end": 688,
                        "text": "(Mitchell et al., 2010;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 689,
                        "end": 714,
                        "text": "Koolen and Krahmer, 2010;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 715,
                        "end": 735,
                        "text": "Clarke et al., 2013)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 949,
                        "end": 974,
                        "text": "(Kazemzadeh et al., 2014;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 975,
                        "end": 996,
                        "text": "Gkatzia et al., 2015)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2."
            },
            {
                "text": "Although attempts have been made to systematically assess the effects of the different domains on the reference task (Gkatzia et al., 2015) , the comparability of existing reference corpora is limited as they are based on very different types of visual stimuli. PentoRef provides an unusually wide spectrum of experimental settings that have been investigated in a single domain, combining various levels of interactivity and mediation on the one hand, and variation between virtual and real-world scenes on the other.",
                "cite_spans": [
                    {
                        "start": 117,
                        "end": 139,
                        "text": "(Gkatzia et al., 2015)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2."
            },
            {
                "text": "PentoRef consists of different manipulations on taskoriented puzzle-playing using the 12 Pentomino pieces, individuated by their shape. When more than one set of Pentominoes is used, the object type may also be individuated by colour. An important difference to standard reference resources is that control over the set of distractors was not a major consideration during experiment design. Different settings vary widely with respect to number of pieces in a scene, and the properties that a target piece shares with distractor objects. For instance, in some settings, all pieces had the same color. In other settings, each piece had a unique color. Taken together as a corpus, the experiments thus provide an interesting test-bed for REG and RR systems that need to adapt to different types of visual contexts within a common domain.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PentoRef Overview",
                "sec_num": "3."
            },
            {
                "text": "In the puzzle games, a player can have one of the following roles: (i) the Instruction Giver (IG), the player who has complete knowledge about the game's goal (e.g. a picture of a shape constructed out of Pentomino pieces), but who cannot manipulate the pieces herself, or (ii) the Instruction Follower (IF) who can manipulate pieces, but does not have knowledge about the game's goal. In order to achieve the goal, the IG has to formulate verbal instructions which the IF has to execute in terms of actions on the game board (i.e. selecting, moving, rotating, or placing pieces). In this task-oriented setting, it is possible to directly assess the communicative success (effectiveness) of an utterance or a referring expression in that if the IF could quickly identify the intended Pentomino piece in the scene, the referring expression formulated by the IG was immediately effective. In some of the interactions, only the piece selection is required of the IF rather than the construction of the entire puzzle, however reference identification is common to all domains. The corpus contains two main types of task-oriented interactions:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Task",
                "sec_num": "3.1."
            },
            {
                "text": "Human-wizard interaction: A human IG has the task to instruct what they believe to be a machine to select or move certain pieces on a game board or desk. Depending on the setting the IG can use speech, and sometimes, gesture. Behind the scenes, a human wizard performs the game actions as the IF. The IG receives signals of the wizard's game actions (e.g. via highlighted pieces on the screen, or audio signals). In some cases, the IG can react to these signals.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Task",
                "sec_num": "3.1."
            },
            {
                "text": "Human-human dialogues: The IF is a human player that communicates with the IG via speech. Both players collaboratively perform the task (i.e. building a shape out of Pentomino pieces). The IG has the desired solution to the puzzle, but cannot manipulate pieces, whereas the IG can manipulate pieces but does not have the solution. Table 1 shows an overview of the data that we have bundled up for PentoRef, and introduces the sub-corpora with their labels, as they were used in previous research. Experimental settings have been manipulated along the following dimensions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 331,
                        "end": 338,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "General Task",
                "sec_num": "3.1."
            },
            {
                "text": "Scene: In virtual settings, Pentomino pieces are shown as graphical objects on a computer screen. In the realworld settings, participants had to interact with real pieces on a physical game board. There is also an intermediate level of \"images\" in the RDG-Pento experiment, a version of the RDG-Image game described in (Manuvinakurike et al., 2015) , using the same webbased data-collection methods using photographs of real Pentomino pieces.",
                "cite_spans": [
                    {
                        "start": 319,
                        "end": 348,
                        "text": "(Manuvinakurike et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Settings",
                "sec_num": "3.2."
            },
            {
                "text": "Pre-solved game: When the game plan is pre-solved, the IG cannot decide on the pieces that the IF has to select and actions that the IF has to perform, but has to follow some plan given to them as a stimulus. When the game is not pre-solved, the IG can freely decide on the order of game actions, and potentially, the types of pieces the IF has to select.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Settings",
                "sec_num": "3.2."
            },
            {
                "text": "Vision: When vision is available, IGs can observe what the IF is doing, e.g. via a camera feed of the IF's game board and their hands, or the IF's mouse movements on a screen. Otherwise, participants only communicate via speech.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Settings",
                "sec_num": "3.2."
            },
            {
                "text": "In each experimental setting, players had to interact with Pentomino pieces. Beyond that common property, the different settings vary widely with respect to number of pieces in a scene, and the properties that a target piece would share with distractor objects. This is illustrated in Figure 1 , showing four example scenes from Take, Take-CV, Visual Pento, and WOz-Pento. For instance, in Visual Pento, all pieces initially have the same color (blue) and their shape uniquely distinguishes them from all other pieces. For the Take experiment, the scenes were randomly generated and contained a large number of pieces in various colors such that there were always pieces that had the same color and/or shape. As another example, the scenes in Take-CV were composed of real Pentomino pieces taken from 3 sets and randomly distributed on a desk. In this case, some colors only occur with a particular shape (e.g. red crosses). Moreover, there were wooden pieces or pieces with different shades of the same color. Another difference between the virtual and the real scenes concerns the orientation of the pieces. In the virtual scenes, the pieces were arranged on a regular rectangular grid. The ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 285,
                        "end": 293,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Scenes and Distractors",
                "sec_num": "3.3."
            },
            {
                "text": "Task In this Wizard-of-Oz study, users gave instructions to the system (the wizard) in order to manipulate (select, rotate, mirror, delete) puzzle pieces on an upper board and to put them onto a lower board, reaching a pre-specified goal state. Each participant took part in several rounds in which the distinguishing characteristics for puzzle pieces (color, shape, pro-posed name, position on the board) varied widely.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WOz Pento",
                "sec_num": "4.1."
            },
            {
                "text": "Task In this Wizard-of-Oz study, the participant was confronted with a game board containing 15 randomly selected Pentomino puzzle pieces (out of a repertoire of 12 shapes, and 6 colors). The positions of the pieces were randomly determined, but in such a way that the pieces grouped in the four corners of the screen. They were instructed to (silently) choose a Pentomino tile on the screen and then instruct the computer system to select this piece by describing and pointing to it. When a piece was selected (by the wizard), the participant had to utter a confirmation (or give negative feedback) and a new board was generated and the process repeated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Take",
                "sec_num": "4.2."
            },
            {
                "text": "Procedure The participants were seated at a table in front of the screen. Their gaze was then calibrated with an eye tracker (Seeingmachines FaceLab) placed above the screen and their arm movements (captured by a Microsoft Kinect, also above the screen) were also calibrated. The utterances, board states, arm movements, and gaze information were recorded in a similar fashion as described in (?). The wizard was instructed to elicit pointing gestures by waiting to select the participant-referred piece by several seconds, unless a pointing action by the participant had already occurred. When the wizard misunderstood, or a technical problem arose, the wizard had an option to flag the episode.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Take",
                "sec_num": "4.2."
            },
            {
                "text": "Task In this Wizard-of-Oz setting, participants were seated in front of a table with 36 Pentomino puzzle pieces that were randomly placed with some space between them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Take-CV",
                "sec_num": "4.3."
            },
            {
                "text": "The task of the participant was to refer to that object using only speech, as if identifying it for a friend sitting next to the participant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Take-CV",
                "sec_num": "4.3."
            },
            {
                "text": "Procedure Above the table was a camera that recorded a video feed of the objects, processed using OpenCV to segment the objects; of those, one (or one pair) was chosen randomly by the experiment software. The video image was presented to the participant on a display placed behind the table, but with the randomly selected piece (or pair of pieces) indicated by an overlay. The wizard had an identical screen depicting the scene but not the selected object. The wizard listened to the participants RE and clicked on the object she thought was being referred on her screen. If it was the target object, a tone sounded and a new object was randomly chosen. If a wrong object was clicked, a different tone sounded, the episode was flagged, and a new episode began. At varied intervals, the participant was instructed to \"shuffle\" the board between episodes by moving around the pieces.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Take-CV",
                "sec_num": "4.3."
            },
            {
                "text": "Phases The first half of the allotted time constituted Phase 1. After Phase 1 was complete, instructions for Phase 2 were explained: the screen showed the target and also a landmark object, outlined in blue, near the target. The participant was instructed to refer to the target using the landmark. (In the instructions, the concepts of landmark and target were explained in general terms.) All other instructions remained the same as Phase 1. The targets identifier, which was always known be-forehand, was always recorded. For Phase 2, the landmarks identifier was also recorded.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Take-CV",
                "sec_num": "4.3."
            },
            {
                "text": "Task The IG instructs the IF on how to build a Pentomino puzzle-an elephant shape built out of tiles that are composed out of five squares (see Figure 1) . The IG has the solution of the puzzle, while the IF is only given the outline and a set of 12 loose pieces. The Pentomino pieces available to the IF, while distinct in shape, are all the same colour and do not have an identifying label.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 144,
                        "end": 153,
                        "text": "Figure 1)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Noise/No-noise",
                "sec_num": "4.4."
            },
            {
                "text": "Conditions In Noise/No-Noise, there were two conditions: a Noise condition (experimental group) where the channel from the IG to the IF was manipulated by replacing, in real time and at random points, all signal with noise (brown noise, matched to loudness level of channel); and Figure 1 : A common reference mark-up across the PentoRef settings (the letters V and Z serve as shape identifiers)",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 280,
                        "end": 288,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Noise/No-noise",
                "sec_num": "4.4."
            },
            {
                "text": "a No-noise condition (control group) where there were no manipulations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Noise/No-noise",
                "sec_num": "4.4."
            },
            {
                "text": "Procedure Subjects were jointly greeted by the experimenter, who briefly explained the tasks to be carried out and allowed them to choose their roles as either IG or IF. They were then placed in different sound-proof rooms and were given written instructions for the Pentomino task. The IF was allowed a few minutes to get used to the Pentomino program. After subjects had read the instructions, the experimenter asked each of them whether they had any questions. Before leaving the IF room, the experimenter said to the IF something to the effect of: \"There might be some problems with the audio, which we can't fix at the moment, so please just go ahead\". This was done in order to prevent subjects in the noise condition from coming out of the room to complain about the quality of the audio. Finally the experimenter left the rooms and the first phase of the run began.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Noise/No-noise",
                "sec_num": "4.4."
            },
            {
                "text": "Task Same as Noise/No-noise.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Visual Pento",
                "sec_num": "4.5."
            },
            {
                "text": "Procedure The setting in this experiment was very much like the one described for the Pentomino task in the Noise experiment, except that there was a visual channel between IG and IF that allowes IG to see the actions performed by IF on the board. This was realised technically through a Virtual Network Computing (VNC) connection between the IF computer and a computer in IG's room, which replayed the GUI of the Pentomino program on which the IF was executing the instructions. Recording was done as described for the No-noise condition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Visual Pento",
                "sec_num": "4.5."
            },
            {
                "text": "Task In this human-human set-up, two participants worked together to construct objects out of 12 pentomino tiles, one person could see the goal shape (the IG), the other could manipulate the objects (the IF). Each game was further subdivided into an initial selection phase and the actual game. In the selection phase, the IF picked some objects and presented them to the IG. The IG had to find a shape in a database with those objects. After that, the IG directed the IF in creating that shape.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pento-CV",
                "sec_num": "4.6."
            },
            {
                "text": "Procedure Subjects were jointly greeted by the experimenter, who briefly explained the tasks to be carried out and allowed them to choose their initial roles as either IG or IF. They were then placed on different tables in the room. Above the table of the IF was a camera that recorded a video feed of the objects and his hands. The video image was presented to the IG on his screen. For each pair of participants, several games were recorded. After the first half of the allotted recording time, participants were asked to switch roles.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pento-CV",
                "sec_num": "4.6."
            },
            {
                "text": "Task This is a Pentomino version of the Rapid Dialogue Game (RDG) described in (Manuvinakurike et al., 2015) , a human-human set-up where participants have audio access to each other through microphones and headsets. The participants had mutual visual access to a set of images, which are changed for each new round in the game. The participant playing the IG role would have one of the images on their screen highlighted as a target. They would describe the target to the participant in the IF role, who would try to identify it as fast as possible and click on the image they guessed to be the target. Participants were motivated by time pressure with the incentive to score as many points as possible in each fixed-duration round.",
                "cite_spans": [
                    {
                        "start": 79,
                        "end": 108,
                        "text": "(Manuvinakurike et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "RDG-Pento",
                "sec_num": "4.7."
            },
            {
                "text": "Procedure Participants were recruited and their technical set-ups tested via the web in the way described by (Manuvinakurike et al., 2015) . Participants would follow on-screen instructions then begin their first round in one of the roles (IG or IF). In each round, the pairs were presented 8 images of Pentomino pieces at a time on their own screens. The participant roles were switched every round. There were several rounds per difficulty level, starting with the easiest task with images of single Pento pieces, then progressing to sets of 2-6 pieces in each image. See Figure 2 for an example of the level with 2 pieces per image.",
                "cite_spans": [
                    {
                        "start": 109,
                        "end": 138,
                        "text": "(Manuvinakurike et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 574,
                        "end": 580,
                        "text": "Figure",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "RDG-Pento",
                "sec_num": "4.7."
            },
            {
                "text": "PentoRef consists of recordings of spontaneous speech. Most REG corpora have been collected in written, noninteractive domains. However, it is well-known that when humans use referring expressions in more natural, interac- Figure 2 : Game board in the RDG setting tive and situated contexts, conversational strategies are entirely different (Clark and Krych, 2004) . Importantly, in a situated dialogue, conversation partners typically collaborate to identify a particular target object, often coordinating on a referring strategy. While the IG utters the RE, the listener (the IF) can give feedback signals (verbal or action-based), or ask for clarifications and engage in repair sequences. A frequent phenomenon is 'reference in installments' where speakers split the reference across several utterances to incrementally build common ground with the listener. On the other hand, in spoken interactions, speakers (instruction givers) do not have unlimited time to ponder an optimal RE to refer to a particular object in a potentially complex scene. As a result, spoken referring expressions (as spoken language in general) typically contain disfluent material, including interruptions, pauses, hesitations, repetitions and self-repairs. To illustrate that PentoRef captures these types of referring, we present a few examples. Example (1) taken from Visual Pento (cf. Figure 1) illustrates typical phenomena in spoken referring expressions, such as repair, interruption and hesitation.",
                "cite_spans": [
                    {
                        "start": 341,
                        "end": 364,
                        "text": "(Clark and Krych, 2004)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 223,
                        "end": 231,
                        "text": "Figure 2",
                        "ref_id": null
                    },
                    {
                        "start": 1369,
                        "end": 1378,
                        "text": "Figure 1)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Referring in Spoken Dialogue: Examples",
                "sec_num": "5."
            },
            {
                "text": "(1) a. IG: IG: In Example (1), the IG first uses an analogical expression to refer to a piece. This is misunderstood by the IF who does not select the intended referent. The IG immediately produces utterances that correct the IF's action and provides more information about the target. In Example (2), again taken from Visual Pento, the IG is not certain how to name the properties of the target piece in an optimal way (i.e. shape or location) so he uses the location of the mouse pointer as a landmark, and produces a hesitation, and a hypernym. The IF interrupts him and asks for feedback about his current piece selection. Ecke.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Referring in Spoken Dialogue: Examples",
                "sec_num": "5."
            },
            {
                "text": "In the RDG Pento data, participants had to refer to sets instead of individual Pentomino pieces. The following Example illustrates a referring expression from that sub-corpus (produced for the second set in the bottom row in Figure  2 ).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 225,
                        "end": 234,
                        "text": "Figure  2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Referring in Spoken Dialogue: Examples",
                "sec_num": "5."
            },
            {
                "text": "(4) blue L on the top and the harry potter sign on the right Finally, we want to point out that our corpus also contains references to locations and a restricted set of actions. In the following example, taken from Pento-CV, the IG tries to explain to the IF how to position and rotate the object on the game board. As this example illustrates, this data is rich in disfluencies which are marked up according to the transcription and segmentation guidelines developed by (Hough et al., 2015 ",
                "cite_spans": [
                    {
                        "start": 471,
                        "end": 490,
                        "text": "(Hough et al., 2015",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Referring in Spoken Dialogue: Examples",
                "sec_num": "5."
            },
            {
                "text": "Here we briefly describe the representations we provide in the corpus. The available annotations and overall corpus statistics including word types and tokens in each experimental setting are summarized in Table 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 206,
                        "end": 213,
                        "text": "Table 2",
                        "ref_id": "TABREF7"
                    }
                ],
                "eq_spans": [],
                "section": "Data Representation for Dialogue, Scenes and References",
                "sec_num": "6."
            },
            {
                "text": "We provide high quality utterance segmentation and transcription according to the manual in (Hough et al., 2015) , all of which was quality checked by the first two au- ",
                "cite_spans": [
                    {
                        "start": 92,
                        "end": 112,
                        "text": "(Hough et al., 2015)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transcription and Segmentation",
                "sec_num": "6.1."
            },
            {
                "text": "Across all datasets we provide a common mark-up for objects, whereby each puzzle piece in a game has a unique ID. Also common across every setting are the two high-level attributes of piece shape 2 and colour from a closed set which is sufficient to identify all piece types across all settings. All referring expressions to pieces are marked with this identifying information over word spans. See Figure 1 which shows the commonality of this mark-up between the virtual and real-world settings. The reference annotation links the transcribed utterances to unique identifiers of pieces in the corresponding scene. In Take-CV, at the time of writing is the only corpus with landmark referents and relations such as 'next to' to be annotated in addition to the target referring expression.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 398,
                        "end": 406,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Referent and Scene Representation",
                "sec_num": "6.2."
            },
            {
                "text": "Visual Information from Scenes For RR and REG automatic tasks, one wishes to identify a referent in a scene given a representation of the scene and the words, so we make available both logical features and, for the real-world scenes, automatically derived real-valued machine vision captured features of each object in the scene. For example in Figure 1 , while the Take dataset provides logical features for a piece such as colour=red, in Take-CV, the features provided are from machine vision and will provide features such as RGB value, hue and saturation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 345,
                        "end": 353,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Referent and Scene Representation",
                "sec_num": "6.2."
            },
            {
                "text": "2 Each object shape name is the letter that corresponds most closely to its shape in its normal orientation. Lightweight database Our data therefore represents the following layers of information: (i) transcribed words, (ii) segmentation of sequences of words into utterances, (iii) annotation of referring expression on word spans, (iv) representations of visual scenes. We use a light-weight relational database format to represent the data in PentoRef, shown in Figure 3 . Information on words, utterances and scenes are kept in tables that can be linked via the identifiers for pieces and referring expressions. Therefore, it is straightforward to query the database for all expressions referring to pieces with a particular shape across the different sub-corpora. In the general case, the scenes in our experiments are dynamic. This means that the location of pieces and their orientation on the game board changes over time. We include timestamps as unique identifiers for scenes.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 465,
                        "end": 473,
                        "text": "Figure 3",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Referent and Scene Representation",
                "sec_num": "6.2."
            },
            {
                "text": "PentoRef transcriptions and annotations are made available under a public PDDL license (doi:10.4119/unibi/ 2901444). Please contact the authors for obtaining audio data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Release",
                "sec_num": "7."
            },
            {
                "text": "We have presented PentoRef, a spoken dialogue corpus consisting of several sub-corpora collected in systematically manipulated settings. The corpus includes a variety of dialogue situations that differ systematically with respect to interactivity, verbal channel, and visual access, which allows for interesting comparisons between experimental settings. The corpus is fully transcribed and enriched with different representations of visual scenes and annotations of referring expressions, providing a rich resource for reference in spontaneous spoken language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "8."
            }
        ],
        "back_matter": [
            {
                "text": "This work was supported by the German Research Foundation (DFG) through the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University and the DUEL project (grant SCHL 845/5-1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": "9."
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "The HCRC Map Task corpus",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "H"
                        ],
                        "last": "Anderson",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Bader",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [
                            "G"
                        ],
                        "last": "Bard",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Boyle",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Doherty",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Garrod",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Isard",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kowtko",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Mcallister",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Miller",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Language and Speech",
                "volume": "34",
                "issue": "",
                "pages": "351--366",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anderson, A. H., Bader, M., Bard, E. G., Boyle, E., Do- herty, G., Garrod, S., Isard, S., Kowtko, J., McAllister, J., Miller, J., et al. (1991). The HCRC Map Task corpus. Language and Speech, 34:351-366.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Speaking while monitoring addressees for understanding",
                "authors": [
                    {
                        "first": "H",
                        "middle": [
                            "H"
                        ],
                        "last": "Clark",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "A"
                        ],
                        "last": "Krych",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Journal of Memory and Language",
                "volume": "50",
                "issue": "1",
                "pages": "62--81",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Clark, H. H. and Krych, M. A. (2004). Speaking while monitoring addressees for understanding. Journal of Memory and Language, 50(1):62-81.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Where's wally: the influence of visual salience on referring expression generation",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "D"
                        ],
                        "last": "Clarke",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Elsner",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Rohde",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Clarke, A. D., Elsner, M., and Rohde, H. (2013). Where's wally: the influence of visual salience on referring ex- pression generation. Frontiers in psychology, 4.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Interaction in task-oriented human-human dialogue: The effects of different turn-taking policies. Proceedings of the First International IEEE",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Fern\u00e1ndez",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Lucht",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Rodr\u00edguez",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fern\u00e1ndez, R., Lucht, T., Rodr\u00edguez, K., and Schlangen, D. (2006). Interaction in task-oriented human-human dia- logue: The effects of different turn-taking policies. Pro- ceedings of the First International IEEE/ACL Workshop on Spoken Language Technology.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Pushto-talk ain't always bad! comparing different interactivity settings in task-oriented dialogue",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Fern\u00e1ndez",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Lucht",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceeding of DECALOG, the 11th International Workshop on the Semantics and Pragmatics of Dialogue",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fern\u00e1ndez, R., Schlangen, D., and Lucht, T. (2007). Push- to-talk ain't always bad! comparing different interac- tivity settings in task-oriented dialogue. Proceeding of DECALOG, the 11th International Workshop on the Se- mantics and Pragmatics of Dialogue (SemDial07).",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Learning when to point: A data-driven approach",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Gatt",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Paggio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
                "volume": "",
                "issue": "",
                "pages": "2007--2017",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gatt, A. and Paggio, P. (2014). Learning when to point: A data-driven approach. In Proceedings of COLING 2014, the 25th International Conference on Computa- tional Linguistics: Technical Papers, pages 2007-2017, Dublin, Ireland, aug. Dublin City University and Associ- ation for Computational Linguistics.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "From the virtual to the real world: Referring to objects in real-world spatial scenes",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Gkatzia",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Rieser",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Bartie",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Mackaness",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of EMNLP 2015. Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gkatzia, D., Rieser, V., Bartie, P., and Mackaness, W. (2015). From the virtual to the real world: Referring to objects in real-world spatial scenes. In Proceedings of EMNLP 2015. Association for Computational Linguis- tics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Comparing local and sequential models for statistical incremental natural language understanding",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Heintze",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Baumann",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
                "volume": "",
                "issue": "",
                "pages": "9--16",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Heintze, S., Baumann, T., and Schlangen, D. (2010). Com- paring local and sequential models for statistical incre- mental natural language understanding. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 9-16. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Disfluency and laughter annotation in a lightweight dialogue mark-up protocol",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Hough",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "De Ruiter",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Betz",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "The 6th Workshop on Disfluency in Spontaneous Speech (DiSS)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hough, J., de Ruiter, L., Betz, S., and Schlangen, D. (2015). Disfluency and laughter annotation in a light- weight dialogue mark-up protocol. In The 6th Workshop on Disfluency in Spontaneous Speech (DiSS).",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "ReferItGame: Referring to Objects in Photographs of Natural Scenes",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Kazemzadeh",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Ordonez",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Matten",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "L"
                        ],
                        "last": "Berg",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014)",
                "volume": "",
                "issue": "",
                "pages": "787--798",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kazemzadeh, S., Ordonez, V., Matten, M., and Berg, T. L. (2014). ReferItGame: Referring to Objects in Pho- tographs of Natural Scenes. In Proceedings of the Con- ference on Empirical Methods in Natural Language Pro- cessing (EMNLP 2014), pages 787-798, Doha, Qatar.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Simple learning and compositional application of perceptually grounded word meanings for incremental reference resolution",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Kennington",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the Conference for the Association for Computational Linguistics (ACL)",
                "volume": "",
                "issue": "",
                "pages": "292--301",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kennington, C. and Schlangen, D. (2015). Simple learning and compositional application of perceptually grounded word meanings for incremental reference resolution. Proceedings of the Conference for the Association for Computational Linguistics (ACL), pages 292-301. As- sociation for Computational Linguistics.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Interpreting situated dialogue utterances: an update model that uses speech, gaze, and gesture information",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Kennington",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Kousidis",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of SIGdial",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kennington, C., Kousidis, S., and Schlangen, D. (2013). Interpreting situated dialogue utterances: an update model that uses speech, gaze, and gesture information. Proceedings of SIGdial 2013.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "The d-tuna corpus: A dutch dataset for the evaluation of referring expression generation algorithms",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Koolen",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Krahmer",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "LREC",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Koolen, R. and Krahmer, E. (2010). The d-tuna corpus: A dutch dataset for the evaluation of referring expression generation algorithms. In LREC.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Evaluating a minimally invasive laboratory architecture for recording multimodal conversational data",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Kousidis",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Pfeiffer",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Malisz",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Wagner",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, INTER-SPEECH2012 Satellite Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kousidis, S., Pfeiffer, T., Malisz, Z., Wagner, P., and Schlangen, D. (2012). Evaluating a minimally invasive laboratory architecture for recording multimodal conver- sational data. In Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, INTER- SPEECH2012 Satellite Workshop.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Walk the talk: Connecting language, knowledge, and action in route instructions",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Macmahon",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Stankiewicz",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Kuipers",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Def",
                "volume": "",
                "issue": "",
                "pages": "1475--1482",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "MacMahon, M., Stankiewicz, B., and Kuipers, B. (2006). Walk the talk: Connecting language, knowledge, and ac- tion in route instructions. Def, pages 1475-1482.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Reducing the cost of dialogue system training and evaluation with online, crowd-sourced dialogue data collection",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Manuvinakurike",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Paetzel",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Devault",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of SEMDIAL 2015 goDIAL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Manuvinakurike, R., Paetzel, M., and DeVault, D. (2015). Reducing the cost of dialogue system training and eval- uation with online, crowd-sourced dialogue data collec- tion. In Proceedings of SEMDIAL 2015 goDIAL.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Natural reference to objects in a visual domain",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Mitchell",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Van Deemter",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Reiter",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 6th international natural language generation conference",
                "volume": "",
                "issue": "",
                "pages": "95--104",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mitchell, M., van Deemter, K., and Reiter, E. (2010). Nat- ural reference to objects in a visual domain. In Proceed- ings of the 6th international natural language genera- tion conference, pages 95-104. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Speaking through a noisy channel -experiments on inducing clarification behaviour in human-human dialogue",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Fern\u00e1ndez",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of Interspeech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Schlangen, D. and Fern\u00e1ndez, R. (2007). Speaking through a noisy channel -experiments on inducing clar- ification behaviour in human-human dialogue. Proceed- ings of Interspeech 2007.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Incremental reference resolution: The task, metrics for evaluation, and a bayesian filtering model that is sensitive to disfluencies",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Schlangen",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Baumann",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Atterer",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of SIGdial 2009, the 10th Annual SIGDIAL Meeting on Discourse and Dialogue",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Schlangen, D., Baumann, T., and Atterer, M. (2009). In- cremental reference resolution: The task, metrics for evaluation, and a bayesian filtering model that is sensi- tive to disfluencies. Proceedings of SIGdial 2009, the 10th Annual SIGDIAL Meeting on Discourse and Dia- logue.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "The REX corpora : A collection of multimodal corpora of referring expressions in collaborative problem solving dialogues",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Tokunaga",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Iida",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Terai",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Kuriyama",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Eigth International Conference on Language Resources and Evaluation (LREC 2012)",
                "volume": "",
                "issue": "",
                "pages": "422--429",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tokunaga, T., Iida, R., Terai, A., and Kuriyama, N. (2012). The REX corpora : A collection of multimodal cor- pora of referring expressions in collaborative problem solving dialogues. In Proceedings of the Eigth Interna- tional Conference on Language Resources and Evalua- tion (LREC 2012), pages 422-429.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "num": null,
                "uris": null,
                "text": "Database design for representing the mapping between dynamic visual context, words and references",
                "type_str": "figure"
            },
            "TABREF1": {
                "text": "Overview of experimental settings in the PentoRef corpus real scenes were more cluttered, and pieces can have various orientations.",
                "num": null,
                "content": "<table/>",
                "html": null,
                "type_str": "table"
            },
            "TABREF7": {
                "text": "Corpus statistics and available annotations for PentoRef thors. For a subset of our corpora, disfluency and laughter annotation is also included in-line in the way described therein, making it suitable for training and testing disfluency detection. For a subset of the corpora the segments are given dialogue act type tags such as Instruction, Confirmation and ClarificationRequest.",
                "num": null,
                "content": "<table/>",
                "html": null,
                "type_str": "table"
            }
        }
    }
}