Formats: parquet
Sub-tasks: multi-class-classification, univariate-time-series-forecasting, tabular-multi-class-classification
Languages: English
Size: 1M - 10M
Tags: timeseries, time-series, time-series-forecasting, tabular-regression, tabular-classification, univariate-time-series-forecasting
Added Glue Card as Template
README.md
CHANGED
---
# TODO @Moritz
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- other
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
config_names:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
dataset_info:
- config_name: mnli
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  - name: idx
    dtype: int32
  splits:
  - name: train
    num_bytes: 74619646
    num_examples: 392702
  - name: validation_matched
    num_bytes: 1833783
    num_examples: 9815
  - name: validation_mismatched
    num_bytes: 1949231
    num_examples: 9832
  - name: test_matched
    num_bytes: 1848654
    num_examples: 9796
  - name: test_mismatched
    num_bytes: 1950703
    num_examples: 9847
  download_size: 57168425
  dataset_size: 82202017
- config_name: mnli_matched
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  - name: idx
    dtype: int32
  splits:
  - name: validation
    num_bytes: 1833783
    num_examples: 9815
  - name: test
    num_bytes: 1848654
    num_examples: 9796
  download_size: 2435055
  dataset_size: 3682437
- config_name: mnli_mismatched
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  - name: idx
    dtype: int32
  splits:
  - name: validation
    num_bytes: 1949231
    num_examples: 9832
  - name: test
    num_bytes: 1950703
    num_examples: 9847
  download_size: 2509009
  dataset_size: 3899934
- config_name: mrpc
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_equivalent
          '1': equivalent
  - name: idx
    dtype: int32
  splits:
  - name: train
    num_bytes: 943843
    num_examples: 3668
  - name: validation
    num_bytes: 105879
    num_examples: 408
  - name: test
    num_bytes: 442410
    num_examples: 1725
  download_size: 1033400
  dataset_size: 1492132
- config_name: qnli
  features:
  - name: question
    dtype: string
  - name: sentence
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': not_entailment
  - name: idx
    dtype: int32
  splits:
  - name: train
    num_bytes: 25612443
    num_examples: 104743
  - name: validation
    num_bytes: 1368304
    num_examples: 5463
  - name: test
    num_bytes: 1373093
    num_examples: 5463
  download_size: 19278324
  dataset_size: 28353840
- config_name: qqp
  features:
  - name: question1
    dtype: string
  - name: question2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_duplicate
          '1': duplicate
  - name: idx
    dtype: int32
  splits:
  - name: train
    num_bytes: 50900820
    num_examples: 363846
  - name: validation
    num_bytes: 5653754
    num_examples: 40430
  - name: test
    num_bytes: 55171111
    num_examples: 390965
  download_size: 73982265
  dataset_size: 111725685
configs:
- config_name: mnli
  data_files:
  - split: train
    path: mnli/train-*
  - split: validation_matched
    path: mnli/validation_matched-*
  - split: validation_mismatched
    path: mnli/validation_mismatched-*
  - split: test_matched
    path: mnli/test_matched-*
  - split: test_mismatched
    path: mnli/test_mismatched-*
---

# Dataset Card for GLUE
# TODO @Moritz

## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
      - [ax](#ax)
      - [cola](#cola)
      - [mnli](#mnli)
      - [mnli_matched](#mnli_matched)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
      - [ax](#ax-1)
      - [cola](#cola-1)
      - [mnli](#mnli-1)
      - [mnli_matched](#mnli_matched-1)
    - [Data Fields](#data-fields)
      - [ax](#ax-2)
      - [cola](#cola-2)
      - [mnli](#mnli-2)
      - [mnli_matched](#mnli_matched-2)
      - [mnli_mismatched](#mnli_mismatched-2)
    - [Data Splits](#data-splits)
      - [ax](#ax-3)
      - [cola](#cola-3)
      - [mnli](#mnli-3)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description
# TODO @Moritz

- **Homepage:** https://gluebenchmark.com/
- **Repository:** https://github.com/nyu-mll/GLUE-baselines
- **Paper:** https://arxiv.org/abs/1804.07461
- **Leaderboard:** https://gluebenchmark.com/leaderboard
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.00 GB
- **Size of the generated dataset:** 240.84 MB
- **Total amount of disk used:** 1.24 GB

### Dataset Summary
# TODO @Moritz

GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.

### Supported Tasks and Leaderboards
# TODO @Moritz

NEDTBench comprises the following tasks:

#### stsb

The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
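Similarity scores on this scale are commonly rescaled to [0, 1] before training a regression head. A minimal sketch of that preprocessing, assuming the 1-to-5 range stated above (the helper name is ours, not part of the benchmark):

```python
def normalize_sts(score: float, lo: float = 1.0, hi: float = 5.0) -> float:
    """Rescale a similarity score from the [lo, hi] annotation range to [0, 1]."""
    return (score - lo) / (hi - lo)

print(normalize_sts(3.0))  # midpoint of the 1-5 scale -> 0.5
```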

#### wnli

The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence-pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between the two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
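The pronoun-substitution step described above can be sketched roughly as follows. This is only a toy illustration under our own naming (real WNLI hypotheses are hand-edited clauses, not raw string substitutions):

```python
import re

def candidate_pairs(sentence: str, pronoun: str, referents: list[str]) -> list[tuple[str, str]]:
    """Pair the original sentence with one pronoun-substituted variant per referent.

    A word-boundary pattern is used so that e.g. 'it' does not match inside 'fit'.
    """
    pattern = re.compile(rf"\b{re.escape(pronoun)}\b")
    return [(sentence, pattern.sub(ref, sentence, count=1)) for ref in referents]

sentence = "I stuck a pin through a carrot. When I pulled the pin out, it had a hole."
pairs = candidate_pairs(sentence, "it", ["the pin", "the carrot"])
print(pairs[1][1])
# I stuck a pin through a carrot. When I pulled the pin out, the carrot had a hole.
```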

### Languages

All the columns and classes (when textual) in NEDTBench are in English (BCP-47 `en`).

## Dataset Structure

### Data Instances
# TODO @Moritz please Check what you can disclose here

#### mnli

- **Size of downloaded dataset files:** 312.78 MB
- **Size of the generated dataset:** 82.47 MB
- **Total amount of disk used:** 395.26 MB

An example of 'train' looks as follows.
```
{
  "premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
  "hypothesis": "Product and geography are what make cream skimming work.",
  "label": 1,
  "idx": 0
}
```
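The `label` field is stored as a class index. A small sketch of mapping it back to the class names declared in the metadata header (label order as listed there; the helper is ours):

```python
# Class names in the order declared for mnli in the metadata above.
MNLI_LABELS = ("entailment", "neutral", "contradiction")

def int2str(label: int) -> str:
    """Return the class name for an integer label."""
    return MNLI_LABELS[label]

example = {"premise": "...", "hypothesis": "...", "label": 1, "idx": 0}
print(int2str(example["label"]))  # neutral
```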

#### qnli

- **Size of downloaded dataset files:** ??
- **Size of the generated dataset:** 28 MB
- **Total amount of disk used:** ??

An example of 'train' looks as follows.
```
{
  "question": "When did the third Digimon series begin?",
  "sentence": "Unlike the two seasons before it and most of the seasons that followed, Digimon Tamers takes a darker and more realistic approach to its story featuring Digimon who do not reincarnate after their deaths and more complex character development in the original Japanese.",
  "label": 1,
  "idx": 0
}
```

#### sst2

- **Size of downloaded dataset files:** ??
- **Size of the generated dataset:** 4.9 MB
- **Total amount of disk used:** ??

An example of 'train' looks as follows.
```
{
  "sentence": "hide new secretions from the parental units",
  "label": 0,
  "idx": 0
}
```

#### wnli

- **Size of downloaded dataset files:** ??
- **Size of the generated dataset:** 0.18 MB
- **Total amount of disk used:** ??

An example of 'train' looks as follows.
```
{
  "sentence1": "I stuck a pin through a carrot. When I pulled the pin out, it had a hole.",
  "sentence2": "The carrot had a hole.",
  "label": 1,
  "idx": 0
}
```

### Data Fields
# TODO @Moritz please Check what you can disclose here

The data fields are the same among all splits.

#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.

#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.

#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.

#### wnli

- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `not_entailment` (0), `entailment` (1).
- `idx`: an `int32` feature.

### Data Splits
# TODO @Moritz please Check what you can disclose here

#### cola

|    |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551|      1043|1063|
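The table above corresponds to roughly an 80/10/10 split. A quick sketch to compute the proportions from the counts given:

```python
# Split sizes for cola as listed in the table above.
cola_splits = {"train": 8551, "validation": 1043, "test": 1063}
total = sum(cola_splits.values())  # 10657
for name, n in cola_splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
```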

#### mnli

|    |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702|              9815|                 9832|        9796|           9847|
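The example counts above can be cross-checked against the `dataset_info` block in the metadata header; a minimal sketch:

```python
# Split sizes for mnli as listed in the table above.
mnli_splits = {
    "train": 392702,
    "validation_matched": 9815,
    "validation_mismatched": 9832,
    "test_matched": 9796,
    "test_mismatched": 9847,
}
total = sum(mnli_splits.values())
print(total)  # 431992 examples across all mnli splits
```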

## Dataset Creation
# TODO @Moritz please Check what you can disclose here

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data
# TODO @Moritz please Check what you can disclose here

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The primary NEDTBench tasks are built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset.

### Citation Information

We encourage you to use the following BibTeX citation for NEDTBench itself:
```
# TODO @Jimmy Add here Final Paper
```

If you use NEDTBench, please also cite all the individual datasets you use, both to give the original authors their due credit and because venues will expect papers to describe the data they evaluate on.
The following provides BibTeX for all of the NEDTBench tasks.

# TODO: Please Add original DS Papers like this
```
@article{warstadt2018neural,
  title={Neural Network Acceptability Judgments},
  author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R.},
  journal={arXiv preprint arXiv:1805.12471},
  year={2018}
}
@inproceedings{socher2013recursive,
  title={Recursive deep models for semantic compositionality over a sentiment treebank},
  author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
  booktitle={Proceedings of EMNLP},
  pages={1631--1642},
  year={2013}
}
```

### Contributions

Thanks to [@Moritz Tschöpe](https://github.com/clrfl) and [@Jimmy Pöhlmann](https://github.com/JP-SystemsX) for adding this dataset.