Update README.md

README.md (changed)
all the words in the sentence they think show
evidence for their chosen label.

#### Our annotations

| Label | Count |
| --- | --- |
| negative | 1555 |
| positive | 1435 |
| no sentiment | 470 |
| **Total** | **3460** |

Note that all the data is uploaded under a single 'train' split (see [Uses](#uses) for further details).

### SST2

*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,
we add an extra layer of information for future research.

#### Our annotations

| Label | Count |
| --- | --- |
| positive | 1027 |
| negative | 900 |
| no sentiment | 163 |
| **Total** | **2090** |

Note that all the data is uploaded under a single 'train' split (see [Uses](#uses) for further details).

### CoS-E

confidence toward your chosen label,
please mark it.’

#### Our annotations

Total 3760

Note that all the data is uploaded under a single 'train' split (see [Uses](#uses) for further details).

### Dataset Sources

- **Repository:** https://github.com/terne/Being_Right_for_Whose_Right_Reasons
- **Paper:** [Being Right for Whose Right Reasons?](https://aclanthology.org/2023.acl-long.59/)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
In our paper, we present a collection of three existing datasets (SST2, DynaSent and CoS-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided by different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.
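
To make "alignment with rationales" concrete, the sketch below scores token-level overlap (F1) between a model rationale mask and a human rationale mask. This is a generic overlap metric shown for illustration only, not necessarily the exact measure used in the paper, and the function name is ours:

```python
def rationale_f1(model_mask, human_mask):
    """Token-level F1 overlap between two binary rationale masks.

    Each mask is a list of 0/1 flags, one per token, marking whether
    that token belongs to the rationale. Generic illustration only,
    not necessarily the metric used in the paper.
    """
    assert len(model_mask) == len(human_mask)
    tp = sum(m and h for m, h in zip(model_mask, human_mask))  # tokens both marked
    pred, gold = sum(model_mask), sum(human_mask)
    if pred == 0 or gold == 0:
        return 0.0
    precision, recall = tp / pred, tp / gold
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Model highlights tokens 1-2; one group's annotators highlight tokens 2-3.
print(rationale_f1([0, 1, 1, 0], [0, 0, 1, 1]))  # 0.5
```

Averaging such scores per socio-demographic group is one way to profile whose rationales a model tends to agree with.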

For each dataset, we provide the data under a single **'train'** split, due to the current limitation that a dataset cannot be uploaded with only a *'test'* split.
Note, however, that the original intended use of this collection of datasets was to **test** the quality and alignment of post-hoc explainability methods.
If you use different splits, please state them clearly to ease reproducibility of your work.
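
Since everything ships in a single 'train' split, one reproducible way to derive your own splits is to fix a random seed. A minimal stdlib sketch (the function and example size are ours, for illustration; with the Hugging Face `datasets` library, `Dataset.train_test_split(test_size=..., seed=...)` achieves the same):

```python
import random

def split_indices(n_examples, test_frac=0.2, seed=42):
    """Deterministically split range(n_examples) into (train, test) index lists.

    A fixed seed makes the split reproducible, which matters because
    this dataset is published as a single 'train' split.
    """
    rng = random.Random(seed)
    idx = list(range(n_examples))
    rng.shuffle(idx)
    n_test = int(n_examples * test_frac)
    return idx[n_test:], idx[:n_test]

# Hypothetical usage: carve a 20% test set out of 3460 examples.
train_idx, test_idx = split_indices(3460, test_frac=0.2, seed=42)
```

Reporting the seed and test fraction (or publishing the index lists) is enough for others to reproduce the exact split.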

## Dataset Structure