mathiascreutz committed · Commit 6306358 · Parent(s): 6ca6aaf

Minor modifications

README.md CHANGED

@@ -130,6 +130,41 @@ TBA
`gem_id`: unique identifier of this entry

+ **Additional information about the annotation scheme:**
+
+ The annotation scores given by an individual annotator are:
+
+ 4: Good example of paraphrases = Dark green button in the annotation
+ tool = The two sentences can be used in the same situation and
+ essentially "mean the same thing".
+
+ 3: Mostly good example of paraphrases = Light green button in the
+ annotation tool = It is acceptable to think that the two sentences
+ refer to the same thing, although one sentence might be more specific
+ than the other one, or there are differences in style, such as polite
+ form versus familiar form.
+
+ 2: Mostly bad example of paraphrases = Yellow button in the annotation
+ tool = There is some connection between the sentences that explains
+ why they occur together, but one would not really consider them to
+ mean the same thing.
+
+ 1: Bad example of paraphrases = Red button in the annotation tool =
+ There is no obvious connection. The sentences mean different things.
+
+ If the two annotators fully agreed on the category, the value in the
+ `annot_score` field is 4.0, 3.0, 2.0 or 1.0. If the two annotators
+ chose adjacent categories, the value in this field will be 3.5, 2.5 or
+ 1.5. For instance, a value of 2.5 means that one annotator gave a
+ score of 3 ("mostly good"), indicating a possible paraphrase pair,
+ whereas the other annotator scored this as a 2 ("mostly bad"), that
+ is, unlikely to be a paraphrase pair. If the annotators disagreed by
+ more than one category, the sentence pair was discarded and won't show
+ up in the datasets.
+
+ The training sets were not annotated manually. This is indicated by
+ the value 0.0 in the `annot_score` field.
+
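The agreement rules above can be sketched as a small helper function. This is a hypothetical illustration of the scoring logic, not part of the dataset's actual tooling; the function name `combine_annotations` is my own.

```python
def combine_annotations(score_a, score_b):
    """Combine two annotator scores (integers 1-4) into an
    `annot_score` value, following the rules described above.
    Returns None if the pair would be discarded."""
    if score_a == score_b:
        # Full agreement: 4.0, 3.0, 2.0 or 1.0
        return float(score_a)
    if abs(score_a - score_b) == 1:
        # Adjacent categories: average gives 3.5, 2.5 or 1.5
        return (score_a + score_b) / 2
    # Disagreement by more than one category: pair is discarded
    return None

print(combine_annotations(3, 3))  # 3.0
print(combine_annotations(3, 2))  # 2.5
print(combine_annotations(4, 2))  # None
```

Note that a value of 0.0 never results from this combination step: it is reserved for the training sets, which were not manually annotated.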
### Data Splits

The data is split into ...