{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T02:09:55.462220Z"
    },
    "title": "Annotating Topics, Stance, Argumentativeness and Claims in Dutch Social Media Comments: A Pilot Study",
    "authors": [
        {
            "first": "Nina",
            "middle": [],
            "last": "Bauwelinck",
            "suffix": "",
            "affiliation": {
                "laboratory": "LT3, Language and Translation Technology Team",
                "institution": "Ghent University Groot",
                "location": {
                    "addrLine": "Brittanni\u00eblaan 45",
                    "postCode": "9000",
                    "settlement": "Ghent",
                    "country": "Belgium"
                }
            },
            "email": ""
        },
        {
            "first": "Els",
            "middle": [],
            "last": "Lefever",
            "suffix": "",
            "affiliation": {
                "laboratory": "LT3, Language and Translation Technology Team",
                "institution": "Ghent University Groot",
                "location": {
                    "addrLine": "Brittanni\u00eblaan 45",
                    "postCode": "9000",
                    "settlement": "Ghent",
                    "country": "Belgium"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "One of the major challenges currently facing the field of argumentation mining is the lack of consensus on how to analyse argumentative user-generated texts such as online comments. The theoretical motivations underlying the annotation guidelines used to generate labelled corpora rarely include motivation for the use of a particular theoretical basis. This pilot study reports on the annotation of a corpus of 100 Dutch user comments made in response to politically-themed news articles on Facebook. The annotation covers topic and aspect labelling, stance labelling, argumentativeness detection and claim identification. Our IAA study reports substantial agreement scores for argumentativeness detection (0.76 Fleiss' kappa) and moderate agreement for claim labelling (0.45 Fleiss' kappa). We provide a clear justification of the theories and definitions underlying the design of our guidelines. Our analysis of the annotations signal the importance of adjusting our guidelines to include allowances for missing context information and defining the concept of argumentativeness in connection with stance. Our annotated corpus and associated guidelines are made publicly available.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "One of the major challenges currently facing the field of argumentation mining is the lack of consensus on how to analyse argumentative user-generated texts such as online comments. The theoretical motivations underlying the annotation guidelines used to generate labelled corpora rarely include motivation for the use of a particular theoretical basis. This pilot study reports on the annotation of a corpus of 100 Dutch user comments made in response to politically-themed news articles on Facebook. The annotation covers topic and aspect labelling, stance labelling, argumentativeness detection and claim identification. Our IAA study reports substantial agreement scores for argumentativeness detection (0.76 Fleiss' kappa) and moderate agreement for claim labelling (0.45 Fleiss' kappa). We provide a clear justification of the theories and definitions underlying the design of our guidelines. Our analysis of the annotations signal the importance of adjusting our guidelines to include allowances for missing context information and defining the concept of argumentativeness in connection with stance. Our annotated corpus and associated guidelines are made publicly available.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "User-generated content (UGC) such as can be found in the comment sections of newspapers and social media sites is a valuable resource for the collection of argumentative texts written in natural language. According to Manosevitch and Walker (2009) , user comments offer a \"substantial amount of factual information, and [demonstrate] a public process of weighing alternatives via the expression of issue positions and supporting rationales\". The field of argumentation mining, which forms a part of Natural Language Processing (NLP) research, uses this type of data as a resource to train and test automatic detection systems for the purpose of extracting the various components making up the argumentation expressed by the users (Park and Cardie, 2014; Villalba and Saint-Dizier, 2012) . Training the systems requires annotating the data, for example labelling claims and reasons for those claims in the text. The various annotation tasks required for producing such data have proven to be very difficult for human annotators. Defining a good set of annotation guidelines is essential towards advancing the field of argumentation mining on UGC data such as social media comments. Currently, the myriad of theoretical perspectives on how to analyse argumentation as well as the unpredictable nature of UGC data have lead to a lack of current consensus on reliable guidelines for the various argumentation annotation tasks.",
                "cite_spans": [
                    {
                        "start": 218,
                        "end": 247,
                        "text": "Manosevitch and Walker (2009)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 730,
                        "end": 753,
                        "text": "(Park and Cardie, 2014;",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 754,
                        "end": 786,
                        "text": "Villalba and Saint-Dizier, 2012)",
                        "ref_id": "BIBREF34"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This paper presents a pilot annotation study for the identification of the topics, topic aspects and stance expressed by the comments, as well as the detection of argumentativeness and the main claim or conclusion presented. This study assesses the suitability of our current guidelines by measuring the Inter Annotator Agreement (IAA) for all tasks and analyses some specific cases which proved most challenging to our annotators. Our aim is to adjust the guidelines based on these results and analysis (Bauwelinck and Lefever, 2020) , which will then serve as the basis for an extensive annotation study on a more substantial corpus and including more annotation tasks required for a full analysis of the argumentation presented in the comments (a.o., this will include premise and argumentative relation annotation).",
                "cite_spans": [
                    {
                        "start": 504,
                        "end": 534,
                        "text": "(Bauwelinck and Lefever, 2020)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In Section 2, we briefly discuss some of the relevant research. In Section 3, we give an overview of the theoretical frameworks which form the basis of our annotation scheme. In Section 4, we describe our pilot corpus and in Section 5 we give an overview of the annotation procedure, as well as more information on the rationale underlying our guidelines. In Section 6, we first present the results of the IAA study. In Section 7, we then present our analysis of the annotations. We end with Section 8 on concluding remarks as well as indications for future research.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In the field of argumentation mining, many different problems related to the analysis of argumentation in texts are being treated as different subtasks for automatic detection. Many of these tasks relate more generally to the processing of various aspects of texts within the broader field of NLP. For example, as a preliminary step towards the more argumentation-specific detection tasks, the tasks of topic and stance detection are often performed. Users, especially when arguing on controversial topics, tend to emphasize specific aspects of the topic, a concept called \"framing\" (Entman, 1993) . Therefore, both the more general topics and the more fine-grained topic aspects need to be identified. A major challenge still lies in determining how the different aspects relate to each other and to the major topic under discussion (Saint-Dizier, 2016) . This challenge relates to the issue of determining how fine-grained the targets of the users' stance needs to be. There is still no consensus on this issue, but the findings have confirmed that the more fine-grained the stance target, the more difficult it is to automatically classify the stance, and stance targets that are defined too broadly may not be specific enough to become associated with each respective side (pro/con) of the debate target (Wojatzki and Zesch, 2016) . Most authors therefore opt for a predetermined list of topics, in which topics either take the form of single words (more coarse-grained approach) or phrases (fine-grained). The latter approach was used by Saint-Dizier (2016), who defined controversial issues as evaluative statements (e.g., \"Climate action is necessary\") as this would aid in mining pro/con arguments for these specific controversies. As a preliminary step for the annotation of different argument components (Stab and Gurevych, 2014; Peldszus and Stede, 2015) , the text needs to be split into smaller segments. 
This step is often skipped in argumentation mining research, in favour of departing from pre-segmented text (Ajjour et al., 2017) . The segmentation of user-generated comments is not straightforward, as they contain many irregularities in the use of punctuation, capitalization and other orthographic markers of sentence boundaries. Determining the boundaries of argumentative segments is challenging but necessary for argumentation mining, as it forms the preliminary step both towards identifying the claim and premises as well as more fine-grained components of the argument. Textual indicators of argumentativeness such as discourse markers are often used as features for the automatic detection of argumentative segments (Eckle-Kohler et al., 2015) . An important caveat of using argumentative words or phrases for the task of argumentativeness detection is that they may also occur in non-argumentative text (van Eemeren and Houtlosser, 2006) . Argumentation mining research has produced some work on the linguistic cues which may be helpful for the identification of argumentativeness (Nguyen and Litman, 2015) , especially the detection of discourse markers has been explored to serve as signallers of argumentativeness to aid the automatic detection of arguments (Somasundaran et al., 2008; Tseronis, 2011; Eckle-Kohler et al., 2015) . The detection of segments representing the claim or conclusion of an argument in texts is an important prerequisite for applications involving fact checking (Vlachos and Riedel, 2014) . It is a very challenging task, especially when applied to an extremely varied resource, like online discourse . There is no current consensus on what exactly constitutes a claim, leading to many different annotation approaches (Daxenberger et al., 2017) .",
                "cite_spans": [
                    {
                        "start": 583,
                        "end": 597,
                        "text": "(Entman, 1993)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 834,
                        "end": 854,
                        "text": "(Saint-Dizier, 2016)",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 1308,
                        "end": 1334,
                        "text": "(Wojatzki and Zesch, 2016)",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 1814,
                        "end": 1839,
                        "text": "(Stab and Gurevych, 2014;",
                        "ref_id": "BIBREF30"
                    },
                    {
                        "start": 1840,
                        "end": 1865,
                        "text": "Peldszus and Stede, 2015)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 2026,
                        "end": 2047,
                        "text": "(Ajjour et al., 2017)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 2644,
                        "end": 2671,
                        "text": "(Eckle-Kohler et al., 2015)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 2832,
                        "end": 2866,
                        "text": "(van Eemeren and Houtlosser, 2006)",
                        "ref_id": "BIBREF33"
                    },
                    {
                        "start": 3010,
                        "end": 3035,
                        "text": "(Nguyen and Litman, 2015)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 3190,
                        "end": 3217,
                        "text": "(Somasundaran et al., 2008;",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 3218,
                        "end": 3233,
                        "text": "Tseronis, 2011;",
                        "ref_id": "BIBREF32"
                    },
                    {
                        "start": 3234,
                        "end": 3260,
                        "text": "Eckle-Kohler et al., 2015)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 3420,
                        "end": 3446,
                        "text": "(Vlachos and Riedel, 2014)",
                        "ref_id": "BIBREF35"
                    },
                    {
                        "start": 3676,
                        "end": 3702,
                        "text": "(Daxenberger et al., 2017)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Research",
                "sec_num": "2"
            },
            {
                "text": "For the annotation of topics in comments, it is difficult to predetermine a list of possible topics. The comments in our corpus were made in response to newspaper articles shared via Facebook, so we can safely assume they will often refer to the newspaper content (Manosevitch and Walker, 2009) . In order to capture all the available topic information, the distinction between structuring and interactional topics is useful. The structuring topics are those found in the surrounding context; the interactional topics are those which form the topics of discussion and are found in the immediate context of interaction (Stromer-Galley and Martinson, 2009) . This distinction has been applied by Rowe (2015) , who used the two categories for the topic annotation of user comments to newspaper articles shared via Facebook and comments made on the same articles, via the official newspaper website. In their study, the structuring topic is that which is reported on in the article and the interactional topics are present in the individual comments. This approach seems feasible for our corpus, since the data type is so similar.",
                "cite_spans": [
                    {
                        "start": 264,
                        "end": 294,
                        "text": "(Manosevitch and Walker, 2009)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 618,
                        "end": 654,
                        "text": "(Stromer-Galley and Martinson, 2009)",
                        "ref_id": "BIBREF31"
                    },
                    {
                        "start": 694,
                        "end": 705,
                        "text": "Rowe (2015)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Theoretical Frameworks",
                "sec_num": "3"
            },
            {
                "text": "For some interactional topics and aspects in the comment, the user expresses a stance reflecting his personal opinion towards the specific topic or aspect. Not all topics and aspects touched upon in the comment will be the target of the users' opinion, as some will only be mentioned in passing, or to set up the context for the argumentation. The typical stance annotation labels include pro, contra and none, where the last one represents a topic or aspect which is not used as a target for the opinion of the user (K\u00fc\u00e7\u00fck and Can, 2020) . Parallel to the distinction between the broader (discussion-wide) structuring topic label and the narrower interactional topic label (based on the comment text itself), the distinction between a debate stance and explicit stance is made by Wojatzki and Zesch (2016) . The debate stance is the stance towards the target of the whole debate. It is often implicit and inferrable from the explicit stance(s) which rely on textual evidence. This distinction has proven successful for stance annotation on noisy social media data and has even helped model implicit argumentation (Wojatzki and Zesch, 2016) .",
                "cite_spans": [
                    {
                        "start": 517,
                        "end": 538,
                        "text": "(K\u00fc\u00e7\u00fck and Can, 2020)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 781,
                        "end": 806,
                        "text": "Wojatzki and Zesch (2016)",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 1114,
                        "end": 1140,
                        "text": "(Wojatzki and Zesch, 2016)",
                        "ref_id": "BIBREF37"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Theoretical Frameworks",
                "sec_num": "3"
            },
            {
                "text": "Given the difficulty of the automatic segmentation of argumentative texts (Al-Khatib et al., 2016) and our focus on the segment-level annotation of argumentativeness and claims, we also perform a preliminary manual segmentation step before handing the data to the annotators to avoid the error percolation which an added segmentation annotation task would inevitably introduce.",
                "cite_spans": [
                    {
                        "start": 74,
                        "end": 98,
                        "text": "(Al-Khatib et al., 2016)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Theoretical Frameworks",
                "sec_num": "3"
            },
            {
                "text": "While discourse markers are considered useful indicators of argumentativeness (Eckle-Kohler et al., 2015), Tseronis (2011) proposed the more specific concept of the argumentative marker which is a lexical item signalling the presence of an argumentative move (e.g., marking a standpoint). The argumentative marker may consist of a single word, phrase or sentence. A distinction is made between those markers which are syntactically part of the main constituents of the phrase and those that are independent on a semantic and syntactic level (Tseronis, 2011) . The concept of shell language developed by Madnani et al. (2012) is similar to the argumentative markers, in the sense that they may be used as signallers of argumentativeness and may consist of longer sequences of words than is the case for discourse markers. Shell language includes organizational phrases, such as ones marking the expression of an opinion (e.g., \"I think that\"), but also ones marking argumentative structures (such as \"after all\"). Du et al. (2014) have used shell language for the task of automatically separating topical contents from organizational language and have demonstrated the usefulness of this task, for instance to improve topic detection. They evaluated their fully unsupervised Shell Topic Model on argumentative UGC sourced from online forums. Ducrot (1982) contends that argumentativeness is present in an utterance even if it does not contain a linguistic expression it may explicitly be linked to. The first major challenge lies in defining what exactly constitutes argumentativeness. The distinction between the argumentative and informative components of utterances (Anscombre, 1995) provides one possible answer. The informative component corresponds to the propositional content of the sentence. The argumentative component signals the utterance's argumentative orientation: whether or not it has the potential of being used as a premise for a given conclusion. 
This perspective helps to circumvent the difficulty of reconstructing the intentions of the author of an argumentative text, as it emphasizes the text itself as the locus of the author's intention.",
                "cite_spans": [
                    {
                        "start": 107,
                        "end": 122,
                        "text": "Tseronis (2011)",
                        "ref_id": "BIBREF32"
                    },
                    {
                        "start": 541,
                        "end": 557,
                        "text": "(Tseronis, 2011)",
                        "ref_id": "BIBREF32"
                    },
                    {
                        "start": 603,
                        "end": 624,
                        "text": "Madnani et al. (2012)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 1013,
                        "end": 1029,
                        "text": "Du et al. (2014)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1341,
                        "end": 1354,
                        "text": "Ducrot (1982)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 1668,
                        "end": 1685,
                        "text": "(Anscombre, 1995)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Theoretical Frameworks",
                "sec_num": "3"
            },
            {
                "text": "Biran and Rambow (2011)'s concept of the claim consists of any utterance conveying subjective information and anticipating the question \"why are you telling me that?\". Daxenberger et al. (2017) have criticized Biran and Rambow's (2011) annotated dataset of online comments from blog threads for containing noisy sentences annotated with claims. However, we hypothesize this is not necessarily caused by the definition of the claim concept. Instead, it is a characteristic of this type of data. Our annotation guidelines employ a very similar definition of the claim concept as the one used in Biran and Rambow (2011) .",
                "cite_spans": [
                    {
                        "start": 168,
                        "end": 193,
                        "text": "Daxenberger et al. (2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 210,
                        "end": 235,
                        "text": "Biran and Rambow's (2011)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 593,
                        "end": 616,
                        "text": "Biran and Rambow (2011)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Theoretical Frameworks",
                "sec_num": "3"
            },
            {
                "text": "The current pilot study represents the first phase of our aim to define the annotation guidelines on a corpus of 100 Dutch user-generated comments sourced from the social media platform Facebook. All comments were made in response to an online newspaper article being shared via the official Facebook page of a Flemish newspaper. In this first phase, we focus on measuring agreement and finding edge cases for the annotation tasks of topic and stance labelling, segmentation of the text into argumentative units and claim identification. The second phase will consist of an agreement study on the same corpus sample of 100 comments and will focus on the tasks of premise and argumentative relation identification.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Theoretical Frameworks",
                "sec_num": "3"
            },
            {
                "text": "Facebook is by far the most popular platform for accessing news (Rowe, 2015) . However, given the difficulty in collecting this data automatically and assumptions about the lack of argumentation in shorter texts , this platform is rarely used to source data for argumentation mining purposes. Our corpus of 100 Dutch comments was sourced from the official Facebook page of Het Laatste Nieuws (HLN), a popular Flemish newspaper. As is the case for many news outlets (Rowe, 2015) , the Facebook page is used to share news articles with a wider audience, often linking to the official website of the newspaper. The corpus was collected manually and contains comments made on posts published in the second and third weeks of June 2020. We did not predetermine a list of topics to filter the Facebook posts. Instead, we chose to collect the first ten comments of each most recent post on the page which was topically related to a political or controversial topic. This included news stories related to policy decisions, party politics, but also related to topics like health care, poverty and racism. We filtered out duplicate comments and comments in other languages than Dutch, but did not filter out multiple comments made by the same user. Aside from the 100 comments, we also collected the 30 associated article texts and gathered 30 screenshots of the Facebook posts commented upon.",
                "cite_spans": [
                    {
                        "start": 64,
                        "end": 76,
                        "text": "(Rowe, 2015)",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 465,
                        "end": 477,
                        "text": "(Rowe, 2015)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus",
                "sec_num": "4"
            },
            {
                "text": "For the segmentation of comments containing rhetorical questions, we decided to follow the perspective of Speech Act Theory, which contends that this special type of questions only has the surface structure of a question, but realizes the speech act of making a statement (Walton, 2007) . Thus, every rhetorical question which was followed by an explicit answer was considered as one segment together with that answer. Our approach resulted in a total number of 504 segments across the whole corpus. The highest number of segments for a single comment is 19; the lowest is 1. The comments in our corpus contained an average number of 5 segments.",
                "cite_spans": [
                    {
                        "start": 272,
                        "end": 286,
                        "text": "(Walton, 2007)",
                        "ref_id": "BIBREF36"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus",
                "sec_num": "4"
            },
            {
                "text": "We divided our annotators into two groups of three. Each group was assigned different annotation tasks:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus",
                "sec_num": "4"
            },
            {
                "text": "1. The first group had to annotate the topics and topic aspects contained in the comment (interactional topics), as well as the stances expressed towards those topics and aspects. This task was performed on the level of the entire comment text. The annotators could select the topics and aspects from the list of structuring topics identified in the preliminary step for all the articles and Facebook posts. They were allowed to create additional topic and aspect labels if they could not find a suitable one in the list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus",
                "sec_num": "4"
            },
            {
                "text": "2. The second group was tasked with labelling the predefined segments of each comment as argumentative unit (AU) or non-argumentative unit (NAU). Organizational elements which did not carry any argumentatively relevant information were to be marked as NAU. Then, the annotators had to indicate which segment best represented the claim of the argument. Only segments marked as AU were eligible to receive the claim label.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus",
                "sec_num": "4"
            },
            {
                "text": "We provide an example of an annotated comment from our corpus to help clarify the annotation tasks. We refer the reader to the guidelines for a more detailed description of the labels. For the comment \"And in the stores everyone touches all the fruit and vegetables with their hands better to monitor everything that is in the store, people even open packages and eat the product\", the topic-aspect combinations corona (measures, spread of disease), enforcement and shops were labelled by one of our annotators. Three stance labels were identified: pro towards the topic of enforcement and the aspect of measures and contra towards the aspect of spread of disease. The segmented comment was shown to the second group of annotators as follows: \"1[And] 2[in the stores everyone touches all the fruit and vegetables with their hands] 3[better to monitor everything that is in the store,] 4[people even open packages and eat the product]\". In a first round, the annotator determined for each segment whether it was argumentatively relevant or not, resulting in the following labels: argumentative (segments 2, 3 and 4); non-argumentative (segment 1). In a second round, the annotator was given the segments prelabelled for argumentativeness. The annotator then had to determine which of the segments labelled as argumentative best represented the central claim/conclusion of the user comment. In this example, segment 3 was labelled as the claim. All other segments marked argumentative in this comment are therefore seen to support or to be otherwise argumentatively relevant leading up to this claim.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus",
                "sec_num": "4"
            },
            {
                "text": "For the first annotation tasks, viz. assigning topics, aspects and stance, the annotators could assign multiple labels to the same comment. As standard IAA scoring mechanisms (such as Cohen's pairwise kappa) assume the assignment of one category label per unit of annotation, these metrics are not suitable for measuring IAA for multiple labels per annotator. Therefore other metrics, such as Krippendorff's Alpha (Krippendorff, 1970; Krippendorff, 2004) , have to be used to calculate disagreement (or distance) between sets of assigned labels. Krippendorff's Alpha considers difference in annotation on all possible annotation units, irrespective of the number of labels and the type of annotation (categorical, numeric, ordinal). To calculate the distance between two sets of annotation labels, we followed the implementation of Passonneau (2006) , using MASI (Measuring Agreement on Set-valued Items). The resulting distance is 0 when sets are overlapping, and 1 when sets are disjoint.",
                "cite_spans": [
                    {
                        "start": 414,
                        "end": 434,
                        "text": "(Krippendorff, 1970;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 435,
                        "end": 454,
                        "text": "Krippendorff, 2004)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 832,
                        "end": 849,
                        "text": "Passonneau (2006)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inter-annotator Agreement Results",
                "sec_num": "6"
            },
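The MASI distance and its use inside Krippendorff's Alpha can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: masi_distance follows Passonneau's (2006) Jaccard-times-monotonicity formulation, and krippendorff_alpha assumes a fully crossed design (every annotator labels every comment). The example label sets are hypothetical, chosen so that the subset pair reproduces the distances reported later in this section up to rounding.

```python
def masi_distance(a, b):
    """MASI distance between two label sets: 0 for identical sets, 1 for disjoint sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    jaccard = len(a & b) / len(a | b)
    if a == b:
        m = 1.0        # identical sets
    elif a <= b or b <= a:
        m = 2 / 3      # one set subsumes the other
    elif a & b:
        m = 1 / 3      # partial overlap
    else:
        m = 0.0        # disjoint sets
    return 1.0 - jaccard * m

def krippendorff_alpha(items, distance):
    """Krippendorff's Alpha for fully crossed data.

    items: one list per annotation unit, holding each annotator's label set.
    Alpha = 1 - observed/expected mean pairwise distance.
    """
    def mean_pairwise(values):
        pairs = [(x, y) for i, x in enumerate(values)
                        for j, y in enumerate(values) if i != j]
        return sum(distance(x, y) for x, y in pairs) / len(pairs)

    observed = sum(mean_pairwise(unit) for unit in items) / len(items)
    expected = mean_pairwise([v for unit in items for v in unit])
    return 1.0 - observed / expected

# Hypothetical topic-label sets from two annotators on two comments.
# The subset pair [travel, Corona, aviation] vs [Corona, aviation] gets distance
# 5/9 (~0.56, cf. the reported 0.55), and [police, politician] vs [politician] 2/3.
units = [[{"travel", "Corona", "aviation"}, {"Corona", "aviation"}],
         [{"police", "politician"}, {"politician"}]]
alpha = krippendorff_alpha(units, masi_distance)
```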
            {
                "text": "The first annotation tasks appeared to be challenging, resulting in a total of 88 different topics, 245 aspects and 197 stance labels. As a lot of the disagreement for stance is caused by the choice of different Alpha/MASI distance #comments perfect agreement Ann1 -Ann2 Ann1 -Ann3 Ann2 -Ann3 Ann1 -Ann2 Ann1 -Ann3 Ann2 - aspects of the same topic, we also calculated the distance when only considering the sets of stance labels disregarding the specific topic or aspect they were attached to. Table 1 lists the distance between the sets of labels assigned per pair of annotators and the number of instances with total label agreement for topics, aspects, stance at the topic/aspect level and stance labels ignoring the aspect/topic. In the following step, annotators were charged with labeling each segment as (1) either an argumentative unit (AU) or non-argumentative unit (NAU) and (2) a claim or no claim. As annotators could only assign one label per task, we could apply more traditional IAA metrics for these tasks such as Cohen's pairwise kappa (Cohen, 1960) , which measures agreement between two raters, and Fleiss' kappa (Fleiss, 1971) , that can be used for measuring agreement between multiple raters. Note that the Fleiss kappa is a multi-rater generalization of Scott's pi statistic, not Cohen's kappa. Table 2 lists the agreement scores for these two tasks. For the interpretation of the kappa scores, we refer to Landis and Koch (1977) , who consider kappa scores ranging between 0.21 and 0.40 as fair agreement, between 0.41 and 0.60 as moderate agreement, between 0.61 and 0.80 as substantial agreement and between 0.81 and 0.99 as almost perfect agreement.",
                "cite_spans": [
                    {
                        "start": 1053,
                        "end": 1066,
                        "text": "(Cohen, 1960)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1132,
                        "end": 1146,
                        "text": "(Fleiss, 1971)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 1430,
                        "end": 1452,
                        "text": "Landis and Koch (1977)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 494,
                        "end": 501,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 1318,
                        "end": 1325,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Inter-annotator Agreement Results",
                "sec_num": "6"
            },
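The two kappa statistics used for the segment-level tasks can be sketched with their standard definitions. This is an illustrative implementation, not the authors' code, and the AU/NAU rating data below are hypothetical.

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters assigning one label per item."""
    assert len(r1) == len(r2)
    n = len(r1)
    labels = set(r1) | set(r2)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n                      # observed agreement
    p_e = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

def fleiss_kappa(counts):
    """Fleiss' kappa; counts[i][j] = number of raters assigning category j to item i."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # Marginal proportion of each category over all assignments.
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters) for j in range(n_cats)]
    # Per-item agreement among the raters.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical AU/NAU judgements for four segments by two raters:
ann1 = ["AU", "AU", "NAU", "AU"]
ann2 = ["AU", "NAU", "NAU", "AU"]
# Hypothetical per-segment [AU, NAU] counts for three raters:
table = [[3, 0], [0, 3], [3, 0], [2, 1]]
```

With these toy data, cohen_kappa(ann1, ann2) evaluates to 0.5 and fleiss_kappa(table) to 0.625, i.e. moderate and substantial agreement on the Landis and Koch (1977) scale.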
            {
                "text": "Fleiss' kappa Ann1 -Ann2 Ann1 -Ann3 Ann2 -Ann3 All annotators Argumentativeness /vs/ non-argumentativeness 0.74 0.82 0.71 0.76 Claim detection 0.45 0.48 0.41 0.45 We reached moderate agreement for the topic and aspect labelling tasks, considering the Krippendorff's distance is rather high between all pairs of annotators. This is partly due to errors percolating from the topic labelling step to the aspect labels. When we consider only the stance labels per comment, without taking into account the topic and aspects they are linked to, we see that on average, almost half the comments show perfect agreement. In addition, annotators sometimes assign similar (sub)topics or additional (sub)topics, resulting in fairly large distance scores: e.g. [travel, Corona, aviation] vs [Corona, aviation] results in a distance of 0.55, and [police, politician] vs [politician] results in a distance of 0.665. Perfect agreement on the stance labelling task was reached for only 8 comments, all three annotators agreeing on the \"none\" stance label for those comments. When considering partial agreement (defined here with the following condition satisfied: all three annotators share at least one stance label towards the same topic, e.g. \"(politics)contra\"), we noted partial agreement for 19 comments. In the following, we briefly discuss some concrete annotation examples for the annotation tasks which were performed on the comment level. When considering the distance metric between the stance labels only, we noted some recurring labelling errors.",
                "cite_spans": [
                    {
                        "start": 748,
                        "end": 774,
                        "text": "[travel, Corona, aviation]",
                        "ref_id": null
                    },
                    {
                        "start": 778,
                        "end": 796,
                        "text": "[Corona, aviation]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
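The perfect- and partial-agreement criteria just described can be sketched as set operations over per-annotator stance labels. This is an illustrative sketch, not the authors' code; the (target, stance) label sets below are hypothetical.

```python
def perfect_agreement(annotations):
    """True if all annotators assigned exactly the same set of (target, stance) labels."""
    first = annotations[0]
    return all(a == first for a in annotations)

def partial_agreement(annotations):
    """True if all annotators share at least one identical (target, stance) label."""
    return bool(set.intersection(*annotations))

# Hypothetical stance annotations for one comment, as (target, stance) pairs:
ann1 = {("politics", "contra"), ("compensation", "pro")}
ann2 = {("politics", "contra")}
ann3 = {("politics", "contra"), ("government", "contra")}
```

Here the three annotators reach partial agreement (all share ("politics", "contra")) but not perfect agreement, mirroring the counting used above.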
            {
                "text": "(1) Feeling ill = stay at home! Easy! There are people who can't stand wearing the mask due to breathing problems! Wearing a mask in this heat is asking for trouble for those people. Keep your distance, sanitize your hands and stay AT HOME when ill. EASY... 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "In some cases, the annotator identified the stance towards a specific topic, while the others identified only the stance towards the broader related topic. In Example 1, Annotator 1 identified a contra stance towards the mouth mask obligation topic, while Annotator 2 identified only a pro stance towards the more general topic of corona measures. Annotator 3 did identify both stances. One possible explanation for this confusion is that the annotators may have made a distinction between main topics and more peripheral topics in the comment. In future guidelines, it will be necessary to emphasize the need for identifying both the stance towards the broader topics and more specific ones.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "(2) Just have them all write a protest letter, you won't be receiving much mail.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "Comments containing irony such as Example 2 seem to complicate the stance annotation in some cases (perfect agreement for stance labelling amounting only to 8% of the total of 25 ironic comments and partial agreement reaching 28% (7/25)). Two of the annotators indicated doubts in cases such as these and were subsequently instructed to use the NONE label in case of doubt. This instruction will also be added to the future guidelines. Additionally, we will ask the annotators to mark the instances of irony and context-related understandability issues, a strategy also applied by Wojatzki and Zesch (2016) in their stance annotation study.",
                "cite_spans": [
                    {
                        "start": 581,
                        "end": 606,
                        "text": "Wojatzki and Zesch (2016)",
                        "ref_id": "BIBREF37"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "(3) And the people who kept working from home every day and entertained the kids at the same time, also don't get anything from the government. The energy bill has nevertheless seriously increased in the past 3 months .. This decision was not well considered at all! Some specific topic labels frequently caused disagreement. In Example 3, disagreement is caused by the fact that one annotator has interpreted the user's stance as being in favour of the government handing out a financial compensation to those in need, resulting in the stance label (compensation)PRO. Another annotator has interpreted the stance as being against the lack of compensation for specific groups of the population, resulting in the label (compensation)CONTRA. The third annotator has circumvented the issue by adding the extra labels (government)CONTRA and (aid)PRO. In any case, the future guidelines will still include the option of defining extra labels wherever called for. Considering the difficulty of the annotation task, we reached substantial agreement scores for the task of argumentativeness detection (0.76 Fleiss' kappa) . For the argumentativeness labelling task, nonargumentative segments were defined as those having primarily an organizational function in the comment (for example, to introduce a particular component of the argumentation). Two annotators expressed their doubts as to how to annotate segments which in addition to the organizational function, also seemed to express the stance of the user, such as segments 2 and 6 in Example 4. To avoid mislabelling of such segments which are stance-bearing (and should thus be labelled as AU), we will add this specification to the guidelines. Segments occurring at the very start of the comment caused disagreement for argumentativeness labelling, especially in cases like Example 5 where the segment appeared to interact with the surrounding context (for instance, the title or the content of the newspaper article). 
Since the information in these segments often consists of an evaluative comment on the article content, they are considered stance-bearing and should be labelled as AU. We found a total of 45 comments in our corpus in which the first segment interacted with the context. In 77.8% of these cases (35/45 comments), perfect agreement was reached across all three annotators for the argumentativeness labelling task. When we compare this to the 52% of comments reaching perfect agreement for this task on the whole corpus, we see that the influence of the lack of context information available to our annotators was moderate for this task on our current corpus. We will ensure such segments are flagged in future annotations. This will help us to determine which types of comments may require context features for the automatic detection system.",
                "cite_spans": [
                    {
                        "start": 1099,
                        "end": 1113,
                        "text": "Fleiss' kappa)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "Additionally, it will be useful to ensure our future guidelines are more explicit on the possible guises of argumentativeness (defined in our guidelines as \"all information relevant to the support or expression of the author's position\"). Therefore, we will explicitly state that this then includes background information (e.g., \"In 1400, a virus was considered an illness\") and (parts of) personal narratives (\"Just the other day I saw a whole family with no mouth masks on\"). In our corpus, we found a total of 44 comments contained at least one of these less obviously argumentative segments (corresponding to 19% or 98/504 total segments in our corpus). Since the annotators themselves raised the question of whether or not such segments were to be considered argumentative fairly quickly, we were able to achieve perfect agreement across all three annotators for 80% (79/98) of segments. However, perfect agreement was only reached for 43% (19/44) of all comments containing this type of segment, meaning that there were few comments in which they were all captured. Therefore, we will include more examples of what sequences of less obviously argumentative segments may look like (such as the first segment of Example (6) in the guidelines). In Example (6), while both segments are argumentative (since together they form evidence for the user's claim that the experts are distributing confusing information about the usefulness of mouth masks), we found disagreement on the argumentative relevance of the first segment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "(6) 1[During the acute phase the experts said that wearing mouth masks wasn't useful .] 2[Now that the virus has been seriously reduced, we do have to wear them .] [...] Maar (English: but) (29 instances) and en (English: and) (23 instances) were the most frequently occurring shell language expressions and proved indicative of argumentativeness (But preceded an AU segment for 23/29 instances and And preceded an AU segment for 15/23 instances for all annotators). This is not surprising, since maar (English: but) is a connective often used to signal a contrasting reason in argumentation and en (English: and) can be used to signal an additional reason. An important type of shell language expression we found in our corpus (I shouldn't be saying this; There is not much more to say about this) corresponds to the so-called \"discourse markers of standpoint continuity\", identified by Craig and Sanusi (2000) as a common characteristic of group discussions on controversial issues. They are commonly used to specify argumentative standpoints as well as to avoid disagreement while saving face (Craig and Sanusi, 2000) . However, a clear distinction should be introduced in the guidelines between such markers and segments like I do understand his point or I get that, which are indicative of the stance of the author and should be annotated as AU (all three annotators currently annotated these segments as NAU). The presence of the verbs of understanding as well as the first person singular may help distinguish such segments.",
                "cite_spans": [
                    {
                        "start": 164,
                        "end": 169,
                        "text": "[...]",
                        "ref_id": null
                    },
                    {
                        "start": 888,
                        "end": 911,
                        "text": "Craig and Sanusi (2000)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1096,
                        "end": 1120,
                        "text": "(Craig and Sanusi, 2000)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "We reached moderate agreement for the claim labelling task (0.45 Fleiss' kappa). For 33 comments full agreement was reached on the claim annotation task. Many of the claims identified are elliptical, e.g.: Belgium doomed, Mayor not capable. This is typical of the nature of our data. From Daxenberger et al.'s (2017) analysis of claim segments in various argumentation mining corpora, the presence of policy claims (Schiappa and Nordin, 2014) emerged as a common characteristic in multiple corpora (e.g., of the Wikipedia Talk Page Corpus (Biran and Rambow, 2011) and the Microtext corpus of Peldszus and Stede (2015) ). In our corpus, such policy claims, which are often characterized by the presence of the modal verb \"should\" (Daxenberger et al., 2017) , were present in 32 comments, for example: Everyone should just decide for themselves what they find important, taking care of yourself or blending in with the crowd. Since the policy claim expresses a wish for things to be done differently, it may be expressed in the form of an advice (We can stop driveling about the mouth masks now), or as a strong imperative (if your heart isn't in it, don't do it), the latter of which seems to be particularly indicative of claim segments. Perfect agreement for the claim labeling task was reached on 44% of comments (14/32) (corresponding to 22 segments containing at least one segment with \"should\" or an imperative form.",
                "cite_spans": [
                    {
                        "start": 289,
                        "end": 316,
                        "text": "Daxenberger et al.'s (2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 415,
                        "end": 442,
                        "text": "(Schiappa and Nordin, 2014)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 539,
                        "end": 563,
                        "text": "(Biran and Rambow, 2011)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 592,
                        "end": 617,
                        "text": "Peldszus and Stede (2015)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 729,
                        "end": 755,
                        "text": "(Daxenberger et al., 2017)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "Aside from the occurrence of policy claims, when comparing the claims annotated in our corpus to those found in the Web Discourse corpus, our claims are more often anaphoric in the sense that they express the stance of the author, but without specific lexical reference to the given topic (\"I personally find it really sad it's not obligatory,\"). Many claims contain expressions signalling beliefs (such as \"in my opinion\", \"personally\", \"I find\"), which, according to Daxenberger et al.'s (2017) analysis, is characteristic of the claims found in persuasive student essays (such as the corpus of Stab and Gurevych (2017)). In general, the claim of an argument is more likely to carry stance-taking words toward the topic. This aspect has been identified as a useful feature for the automatic detection of claims (Ajjour et al., 2019).",
                "cite_spans": [
                    {
                        "start": 469,
                        "end": 496,
                        "text": "Daxenberger et al.'s (2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 597,
                        "end": 621,
                        "text": "Stab and Gurevych (2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 813,
                        "end": 834,
                        "text": "(Ajjour et al., 2019)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "The fact that we can find many correspondences with existing corpora of different genres and domains strengthens our assumption that general markers of claim presence may still be found across genres. Adding these markers to the guidelines for the annotation can help create a more unified approach towards the annotation of claims, which appeared to be absent in the field (Daxenberger et al., 2017). We will investigate the presence and usefulness of such general markers of argumentativity in our more extended corpus, covering more domains and platforms from which to source UGC data.",
                "cite_spans": [
                    {
                        "start": 375,
                        "end": 401,
                        "text": "(Daxenberger et al., 2017)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cohen's kappa",
                "sec_num": null
            },
            {
                "text": "The insights we gained from this pilot annotation study will be used to improve our annotation guidelines. In this study, we pre-segmented all the comments in a preliminary step. This was done to avoid excessive error percolation from the segmentation results into the claim annotation task, which would have made calculating agreement on claim identification difficult. However, we realize this pre-segmentation may introduce more bias when considering the annotation of other argumentative components, such as premises and relations between segments. Since the automatic segmentation of texts into argumentative and non-argumentative segments remains very challenging and the field is still advancing, we will use Al-Khatib et al.'s (2016) rule-based algorithm to automatically pre-segment the corpus prior to manual annotation. The annotators will then be asked to correct the automatic segmentation by merging incorrectly split segments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Research",
                "sec_num": "8"
            },
            {
                "text": "We believe the low agreement results for the first set of annotation tasks (topic, aspect and stance detection) may be improved by reducing the number of structuring topic labels the annotators can choose from. This will require pruning the list of structuring topics and aspects we identified to include only the most frequently occurring ones. Since providing predefined structuring topic and aspect labels for the annotators to choose from when deciding on the interactional labels risks introducing bias into our annotation process, our revised annotation guidelines will include an evaluation step, performed by a separate annotator, to remove duplicate labels and flag ambiguities in certain labels. From our analysis of the results for the argumentativeness annotation task, we conclude that our guidelines need to incorporate stance expressions more explicitly as an important indicator of argumentativeness.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Research",
                "sec_num": "8"
            },
            {
                "text": "Since most of our annotators indicated that they had trouble annotating comments due to missing context, we want to explore the impact of context on the various annotation tasks we have performed in this study. First of all, our new guidelines will have to supply more examples of information that is considered argumentatively relevant, e.g., background or context-setting information. In particular, we want to investigate whether there are \"triggering devices\" which are used to evoke context (Nyan, 2017) in the comment. Since the function of context is often defined as narrowing down the range of possible understandings of a text or utterance (Nyan, 2017), we are interested in studying how supplying the annotators with various degrees of context information will, for instance, impact their understanding of the argumentativeness and the claim of a comment.",
                "cite_spans": [
                    {
                        "start": 496,
                        "end": 508,
                        "text": "(Nyan, 2017)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 650,
                        "end": 662,
                        "text": "(Nyan, 2017)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Research",
                "sec_num": "8"
            },
            {
                "text": "Annotation. Six annotators participated in this study. All of them were Dutch native speakers and linguists who received the same annotation guidelines (Bauwelinck and Lefever, 2020) to follow 1 . They each annotated the same set of 100 Dutch user comments, which was used to measure inter-annotator agreement. We performed two preliminary steps to prepare the corpus for the annotators. First, to prepare for the topic annotations, we annotated the structuring topics and aspects contained in the Facebook posts and the news article texts. For the topic annotations of the Facebook posts, we labelled the topic information as present on screenshots of the post in question, thus including information like the title of the article, the accompanying image and description text in our decision making. For the topic annotations of the article texts, we limited ourselves to the title and the first three paragraphs of each article. The second preparatory step was to manually segment all the comments in order to prepare for the argumentativeness and claim annotation tasks, which were to be performed at the segment level. We segmented the comments based on the shell language expressions we found (Madnani et al., 2012; Du et al., 2014), but only if they were also set apart from other segments in a syntactical way (Tseronis, 2011). If the expression occurred syntactically embedded in the phrase, we considered it part of the larger segment. This approach allowed us to distinguish organizational segments from content segments. We did not consider markers of opinion (such as \"I don't think that\") as part of shell language, since we only wanted to focus on separating organizational phrases marking argumentative structures from content segments, leaving markers of subjectivity as part of the content segments. Thus, our segmentation approach is more coarse-grained.
Other researchers like Nguyen and Litman (2015) have used shell-like concepts for the task of detecting argument components in persuasive essays, also allowing shell language (in their terms, \"argument words\") to occur in the argument content (in their terms, \"domain words\").1 Both the corpus (consisting of comments, article texts and Facebook screenshots) and the associated annotation guidelines can be found at https://www.lt3.ugent.be/resources/platos pilot-study/.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "All examples from the corpus given here have been translated from the original Dutch.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Unit segmentation of argumentative texts",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Ajjour",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kiesel",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 4th Workshop on Argument Mining",
                "volume": "",
                "issue": "",
                "pages": "118--128",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Ajjour, W. Chen, J. Kiesel, H. Wachsmuth, and B. Stein. 2017. Unit segmentation of argumentative texts. In Proceedings of the 4th Workshop on Argument Mining, pages 118-128, Copenhagen, Denmark. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Modeling frames in argumentation",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Ajjour",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Alshomary",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
                "volume": "",
                "issue": "",
                "pages": "2922--2932",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Ajjour, M. Alshomary, H. Wachsmuth, and B. Stein. 2019. Modeling frames in argumentation. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2922-2932, Hong Kong, China. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A news editorial corpus for mining argumentation strategies",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Al-Khatib",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kiesel",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Hagen",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
                "volume": "",
                "issue": "",
                "pages": "3433--3443",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Al-Khatib, H. Wachsmuth, J. Kiesel, M. Hagen, and B. Stein. 2016. A news editorial corpus for mining ar- gumentation strategies. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3433-3443, Osaka, Japan.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "De l'argumentation dans la langue \u00e0 la th\u00e9orie des topo\u00ef",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "C"
                        ],
                        "last": "Anscombre",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "La th\u00e9orie des topo\u00ef",
                "volume": "",
                "issue": "",
                "pages": "11--47",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J.C. Anscombre. 1995. De l'argumentation dans la langue \u00e0 la th\u00e9orie des topo\u00ef. In La th\u00e9orie des topo\u00ef, pages 11-47. Kim\u00e9, Paris.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Annotation Guidelines for Labeling Topics, Aspects, Stance, Argumentativeness and Claims in Dutch social media comments, version 1.0",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Bauwelinck",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Lefever",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "3--15",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "N. Bauwelinck and E. Lefever. 2020. Annotation Guidelines for Labeling Topics, Aspects, Stance, Argumen- tativeness and Claims in Dutch social media comments, version 1.0. Technical report, Ghent University, LT3 15-01.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Identifying justifications in written dialogs",
                "authors": [
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Biran",
                        "suffix": ""
                    },
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Rambow",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the 2011 IEEE Fifth International Conference on Semantic Computing, ICSC '11",
                "volume": "",
                "issue": "",
                "pages": "162--168",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "O. Biran and O. Rambow. 2011. Identifying justifications in written dialogs. In Proceedings of the 2011 IEEE Fifth International Conference on Semantic Computing, ICSC '11, pages 162-168, USA. IEEE Computer So- ciety.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A coefficient of agreement for nominal scales",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Cohen",
                        "suffix": ""
                    }
                ],
                "year": 1960,
                "venue": "Educational and Psychological Measurement",
                "volume": "20",
                "issue": "",
                "pages": "37--46",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37-46.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "'I'm just saying...': Discourse markers of standpoint continuity",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "T"
                        ],
                        "last": "Craig",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "L"
                        ],
                        "last": "Sanusi",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Argumentation",
                "volume": "14",
                "issue": "4",
                "pages": "425--445",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R.T. Craig and A.L. Sanusi. 2000. 'I'm just saying...': Discourse markers of standpoint continuity. Argumentation, 14(4):425-445.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "What is the essence of a claim? crossdomain claim identification",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Daxenberger",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Eger",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Habernal",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Stab",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "2055--2066",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Daxenberger, S. Eger, I. Habernal, C. Stab, and I. Gurevych. 2017. What is the essence of a claim? cross- domain claim identification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2055-2066, Copenhagen, Denmark. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Shell miner: Mining organizational phrases in argumentative texts in social media",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Du",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Jiang",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Song",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Liao",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "IEEE International Conference on Data Mining",
                "volume": "",
                "issue": "",
                "pages": "797--802",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Du, J. Jiang, L. Yang, D. Song, and L. Liao. 2014. Shell miner: Mining organizational phrases in argumentative texts in social media. 2014 IEEE International Conference on Data Mining, pages 797-802.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Note sur l'argumentation et l'acte d'argumenter in concession et cons\u00e9cution dans le discours",
                "authors": [
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Ducrot",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "Cahiers de linguistique fran\u00e7aise",
                "volume": "4",
                "issue": "",
                "pages": "143--163",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "O. Ducrot. 1982. Note sur l'argumentation et l'acte d'argumenter in concession et cons\u00e9cution dans le discours. In Cahiers de linguistique fran\u00e7aise, 4, pages 143-163, Gen\u00e8ve. Universit\u00e9 de Gen\u00e8ve.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "On the role of discourse markers for discriminating claims and premises in argumentative discourse",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Eckle-Kohler",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Kluge",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "2236--2242",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Eckle-Kohler, R. Kluge, and I. Gurevych. 2015. On the role of discourse markers for discriminating claims and premises in argumentative discourse. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2236-2242, Lisbon, Portugal. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Framing: Toward clarification of a fractured paradigm",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Entman",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Journal of Communication",
                "volume": "43",
                "issue": "4",
                "pages": "51--58",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4):51- 58.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Measuring nominal scale agreement among many raters",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "L"
                        ],
                        "last": "Fleiss",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "Psychological Bulletin",
                "volume": "76",
                "issue": "",
                "pages": "378--382",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J.L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76:378-382.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Argumentation mining in user-generated web discourse",
                "authors": [
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Habernal",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Computational Linguistics",
                "volume": "43",
                "issue": "1",
                "pages": "125--179",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "I. Habernal and I. Gurevych. 2017. Argumentation mining in user-generated web discourse. Computational Linguistics, 43(1):125-179.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Bivariate agreement coefficients for reliability of data",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Krippendorff",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "Sociological methodology",
                "volume": "",
                "issue": "",
                "pages": "139--150",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Krippendorff. 1970. Bivariate agreement coefficients for reliability of data. Sociological methodology, pages 139-150.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Measuring the reliability of qualitative text analysis data. Quality quantity",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Krippendorff",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "38",
                "issue": "",
                "pages": "787--800",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Krippendorff. 2004. Measuring the reliability of qualitative text analysis data. Quality quantity, 38:787-800.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Stance Detection: A Survey",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "K\u00fc\u00e7\u00fck",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Can",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "ACM Computing Surveys",
                "volume": "",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D. K\u00fc\u00e7\u00fck and F. Can. 2020. Stance Detection: A Survey. ACM Computing Surveys, 53(1).",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "The measurement of observer agreement for categorical data",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "R"
                        ],
                        "last": "Landis",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "G"
                        ],
                        "last": "Koch",
                        "suffix": ""
                    }
                ],
                "year": 1977,
                "venue": "Biometrics",
                "volume": "33",
                "issue": "",
                "pages": "159--174",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J.R. Landis and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33:159-174.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Identifying high-level organizational elements in argumentative discourse",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Madnani",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Heilman",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Tetreault",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Chodorow",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "20--28",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "N. Madnani, M. Heilman, J. Tetreault, and M. Chodorow. 2012. Identifying high-level organizational elements in argumentative discourse. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 20-28, Montr\u00e9al, Canada. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Reader comments to online opinion journalism: A space of public deliberation",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Manosevitch",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Walker",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "10th International Symposium on Online Journalism",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. Manosevitch and D. Walker. 2009. Reader comments to online opinion journalism: A space of public deliberation. In 10th International Symposium on Online Journalism, Austin, Texas.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Extracting argument and domain words for identifying argument components in texts",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Nguyen",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Litman",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
                "volume": "",
                "issue": "",
                "pages": "22--28",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "H. Nguyen and D. Litman. 2015. Extracting argument and domain words for identifying argument components in texts. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 22-28, Denver, Colorado, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Re-contextualising argumentative meanings: An adaptive perspective. Argumentation",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Nyan",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "31",
                "issue": "",
                "pages": "267--299",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Nyan. 2017. Re-contextualising argumentative meanings: An adaptive perspective. Argumentation, 31:267-299.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Identifying appropriate support for propositions in online user comments",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Park",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Cardie",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the First Workshop on Argumentation Mining",
                "volume": "",
                "issue": "",
                "pages": "29--38",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Park and C. Cardie. 2014. Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29-38, Baltimore, Maryland. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Passonneau",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Resources Association (ELRA).",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "An annotated corpus of argumentative microtexts",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Peldszus",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Stede",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the First Conference on Argumentation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Peldszus and M. Stede. 2015. An annotated corpus of argumentative microtexts. In Proceedings of the First Conference on Argumentation, Lisbon, Portugal.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Deliberation 2.0: Comparing the deliberative quality of online news user comments across platforms",
                "authors": [
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Rowe",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Journal of Broadcasting & Electronic Media",
                "volume": "59",
                "issue": "4",
                "pages": "539--555",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "I. Rowe. 2015. Deliberation 2.0: Comparing the deliberative quality of online news user comments across platforms. Journal of Broadcasting & Electronic Media, 59(4):539-555.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Challenges of argument mining: Generating an argument synthesis based on the qualia structure",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Saint-Dizier",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 9th International Natural Language Generation conference",
                "volume": "",
                "issue": "",
                "pages": "79--83",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. Saint-Dizier. 2016. Challenges of argument mining: Generating an argument synthesis based on the qualia structure. In Proceedings of the 9th International Natural Language Generation conference, pages 79-83, Edinburgh, UK. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Argumentation: Keeping Faith with Reason",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Schiappa",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "P"
                        ],
                        "last": "Nordin",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. Schiappa and J.P. Nordin. 2014. Argumentation: Keeping Faith with Reason. Pearson.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "Discourse level opinion interpretation",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Somasundaran",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Wiebe",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Ruppenhofer",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "801--808",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Somasundaran, J. Wiebe, and J. Ruppenhofer. 2008. Discourse level opinion interpretation. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 801-808, Manchester, UK.",
                "links": null
            },
            "BIBREF30": {
                "ref_id": "b30",
                "title": "Identifying argumentative discourse structures in persuasive essays",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Stab",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
                "volume": "",
                "issue": "",
                "pages": "46--56",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Stab and I. Gurevych. 2014. Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 46-56, Doha, Qatar. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF31": {
                "ref_id": "b31",
                "title": "Coherence in political computer-mediated communication: analyzing topic relevance and drift in chat",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Stromer-Galley",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "M"
                        ],
                        "last": "Martinson",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "",
                "volume": "3",
                "issue": "",
                "pages": "195--216",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Stromer-Galley and A.M. Martinson. 2009. Coherence in political computer-mediated communication: analyzing topic relevance and drift in chat. Discourse & Communication, 3(2):195-216.",
                "links": null
            },
            "BIBREF32": {
                "ref_id": "b32",
                "title": "From Connectives to Argumentative Markers: A Quest for Markers of Argumentative Moves and of Related Aspects of Argumentative Discourse",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Tseronis",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "",
                "volume": "25",
                "issue": "",
                "pages": "427--447",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Tseronis. 2011. From Connectives to Argumentative Markers: A Quest for Markers of Argumentative Moves and of Related Aspects of Argumentative Discourse. Argumentation, 25(4):427-447.",
                "links": null
            },
            "BIBREF33": {
                "ref_id": "b33",
                "title": "Strategic maneuvering: A synthetic recapitulation",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Van Eemeren",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Houtlosser",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Argumentation",
                "volume": "20",
                "issue": "4",
                "pages": "381--392",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. van Eemeren and P. Houtlosser. 2006. Strategic maneuvering: A synthetic recapitulation. Argumentation, 20(4):381-392.",
                "links": null
            },
            "BIBREF34": {
                "ref_id": "b34",
                "title": "Some facets of argument mining for opinion analysis",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "P G"
                        ],
                        "last": "Villalba",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Saint-Dizier",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "COMMA",
                "volume": "245",
                "issue": "",
                "pages": "23--34",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M.P.G. Villalba and P. Saint-Dizier. 2012. Some facets of argument mining for opinion analysis. COMMA, 245:23-34.",
                "links": null
            },
            "BIBREF35": {
                "ref_id": "b35",
                "title": "Fact checking: Task definition and dataset construction",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Vlachos",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Riedel",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science",
                "volume": "",
                "issue": "",
                "pages": "18--22",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Vlachos and S. Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18-22, Baltimore, MD, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF36": {
                "ref_id": "b36",
                "title": "Dialog theory for critical argumentation",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "N"
                        ],
                        "last": "Walton",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "John Benjamins",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D.N. Walton. 2007. Dialog theory for critical argumentation. John Benjamins, Amsterdam.",
                "links": null
            },
            "BIBREF37": {
                "ref_id": "b37",
                "title": "Stance-based Argument Mining - Modeling Implicit Argumentation Using Stance",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Wojatzki",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Zesch",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the KONVENS",
                "volume": "",
                "issue": "",
                "pages": "313--322",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Wojatzki and T. Zesch. 2016. Stance-based Argument Mining - Modeling Implicit Argumentation Using Stance. In Proceedings of the KONVENS, pages 313-322.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "uris": null,
                "text": "(4) 1[Feeling ill = stay at home!] 2[Easy!] 3[There are people who can't stand wearing the mask due to breathing problems!] 4[Wearing a mask in this heat is asking for trouble for those people.] 5[Keep your distance, sanitize your hands and stay AT HOME when ill.] 6[EASY...] 7.2 Segment-level annotation tasks (argumentativeness, claim detection)",
                "type_str": "figure"
            },
            "TABREF1": {
                "text": "Cohen's kappa agreement scores for pairs of annotators and Fleiss' kappa agreement for all three annotators for the tasks of argumentativeness and claim detection.",
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td>7 Analysis</td></tr><tr><td>7.1 Comment-level annotation tasks (topic, aspect, stance detection)</td></tr></table>",
                "num": null
            },
            "TABREF2": {
                "text": "But no matter what she says or does, it will never be enough.][...]",
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td>(5) 1[She's right.] 2[</td></tr></table>",
                "num": null
            }
        }
    }
}