Dataset tags:
Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: German
Size: 100K<n<1M
License:
#4: Convert dataset sizes from base 2 to base 10 in the dataset card
Opened by albertvillanova (HF Staff)
README.md CHANGED

@@ -132,9 +132,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf](https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf)
 - **Point of Contact:** [Darina Benikova](mailto:benikova@aiphes.tu-darmstadt.de)
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 10.29 MB
+- **Size of the generated dataset:** 18.03 MB
+- **Total amount of disk used:** 28.31 MB

 ### Dataset Summary

@@ -154,9 +154,9 @@ German
 #### germeval_14

-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 10.29 MB
+- **Size of the generated dataset:** 18.03 MB
+- **Total amount of disk used:** 28.31 MB

 An example of 'train' looks as follows. This example was too long and was cropped:
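The difference the PR title refers to is between binary prefixes (1 MiB = 2^20 bytes) and decimal prefixes (1 MB = 10^6 bytes): the same byte count reads smaller in base 2 than in base 10. A minimal sketch of the two conversions, using a hypothetical byte count chosen so the decimal reading matches the card's new 10.29 MB value (the actual byte count is not stated in the diff):

```python
def to_mb_base10(n_bytes):
    """Size in decimal megabytes (1 MB = 10**6 bytes)."""
    return n_bytes / 10**6

def to_mib_base2(n_bytes):
    """Size in binary mebibytes (1 MiB = 2**20 bytes)."""
    return n_bytes / 2**20

# Hypothetical download size in bytes (assumption for illustration only).
n = 10_290_000
print(f"{to_mb_base10(n):.2f} MB")   # base-10 reading of the byte count
print(f"{to_mib_base2(n):.2f} MiB")  # base-2 reading of the same byte count
```

Note that both functions divide the same byte count; only the unit definition changes, which is why a card-wide conversion like this PR only rewrites the displayed numbers, not the underlying data.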