id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/dbc4cf0f | 2023-07-30T04:09:40.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1329
dataset_size: 182
---
# Dataset Card for "dbc4cf0f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1a3b5a28 | 2023-08-01T02:26:43.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1330
dataset_size: 178
---
# Dataset Card for "1a3b5a28"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5415ba1e | 2023-08-01T02:26:47.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1330
dataset_size: 178
---
# Dataset Card for "5415ba1e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/67f7015d | 2023-08-01T02:32:00.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1335
dataset_size: 184
---
# Dataset Card for "67f7015d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanganke/EuroSAT | 2023-08-01T08:09:39.000Z | [
"task_categories:image-classification",
"region:us"
] | tanganke | null | null | null | 0 | 49 | ---
task_categories:
- image-classification
---
# EuroSAT
EuroSAT: Downloaded from https://github.com/phelber/EuroSAT (direct link: https://madm.dfki.de/files/sentinel/EuroSAT.zip).
For this dataset, we randomly split the downloaded data into train/validation/test sets (21,600/2,700/2,700 samples), as in the sketch below.
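A minimal sketch of such a random split, assuming the 27,000 downloaded images are listed in `paths` (the original split script and the seed actually used are not documented here):
```python
import random

# Hypothetical list of the 27,000 downloaded EuroSAT image paths.
paths = [f"EuroSAT/img_{i}.jpg" for i in range(27_000)]

random.seed(0)  # assumption: a fixed seed; the original seed is unknown
random.shuffle(paths)

train, val, test = paths[:21_600], paths[21_600:24_300], paths[24_300:]
assert (len(train), len(val), len(test)) == (21_600, 2_700, 2_700)
``` |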
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/ff4ad9fc | 2023-08-01T08:26:58.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1345
dataset_size: 178
---
# Dataset Card for "ff4ad9fc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/3088326b | 2023-08-01T08:29:56.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 172
num_examples: 10
download_size: 1312
dataset_size: 172
---
# Dataset Card for "3088326b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5c480e63 | 2023-08-01T17:09:35.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1325
dataset_size: 176
---
# Dataset Card for "5c480e63"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a5a6e439 | 2023-08-02T21:54:42.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1331
dataset_size: 186
---
# Dataset Card for "a5a6e439"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/851887d0 | 2023-08-02T23:58:24.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "851887d0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/65cb2066 | 2023-08-03T01:56:44.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1341
dataset_size: 182
---
# Dataset Card for "65cb2066"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/80b69762 | 2023-08-03T18:02:27.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1330
dataset_size: 180
---
# Dataset Card for "80b69762"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/f71baca3 | 2023-08-03T20:36:38.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1340
dataset_size: 184
---
# Dataset Card for "f71baca3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/85515c38 | 2023-08-03T21:47:17.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "85515c38"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/eea09ca8 | 2023-08-04T08:25:29.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1326
dataset_size: 178
---
# Dataset Card for "eea09ca8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/aea8c538 | 2023-08-04T20:14:12.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1331
dataset_size: 184
---
# Dataset Card for "aea8c538"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/4374abee | 2023-08-04T20:17:34.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1331
dataset_size: 178
---
# Dataset Card for "4374abee"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/559f2539 | 2023-08-04T20:21:54.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1312
dataset_size: 182
---
# Dataset Card for "559f2539"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/3d3c8f29 | 2023-08-04T20:24:08.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1331
dataset_size: 182
---
# Dataset Card for "3d3c8f29"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/068927a6 | 2023-08-05T15:24:56.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1324
dataset_size: 182
---
# Dataset Card for "068927a6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/4de0c02b | 2023-08-06T20:09:49.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1331
dataset_size: 182
---
# Dataset Card for "4de0c02b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/81882391 | 2023-08-06T20:09:52.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1331
dataset_size: 182
---
# Dataset Card for "81882391"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/bab683f5 | 2023-08-07T04:28:32.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1342
dataset_size: 182
---
# Dataset Card for "bab683f5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c7ead5d2 | 2023-08-07T17:43:16.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1330
dataset_size: 182
---
# Dataset Card for "c7ead5d2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5096d09e | 2023-08-07T17:43:19.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1330
dataset_size: 182
---
# Dataset Card for "5096d09e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/f7c57b0e | 2023-08-07T18:21:31.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1343
dataset_size: 180
---
# Dataset Card for "f7c57b0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/6f12be3a | 2023-08-07T18:30:44.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1337
dataset_size: 178
---
# Dataset Card for "6f12be3a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/7ba466d4 | 2023-08-08T01:34:48.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1314
dataset_size: 182
---
# Dataset Card for "7ba466d4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5045c911 | 2023-08-08T10:33:40.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1330
dataset_size: 178
---
# Dataset Card for "5045c911"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/d5bdcdb8 | 2023-08-09T00:52:54.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1340
dataset_size: 176
---
# Dataset Card for "d5bdcdb8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5106c6e4 | 2023-08-09T17:04:56.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1341
dataset_size: 176
---
# Dataset Card for "5106c6e4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/9ed9197a | 2023-08-09T17:05:01.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1341
dataset_size: 176
---
# Dataset Card for "9ed9197a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c4f6e6c7 | 2023-08-10T04:08:33.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1311
dataset_size: 176
---
# Dataset Card for "c4f6e6c7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/cbf4595f | 2023-08-10T20:11:44.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1340
dataset_size: 180
---
# Dataset Card for "cbf4595f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/f1e9865c | 2023-08-10T20:11:48.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1340
dataset_size: 180
---
# Dataset Card for "f1e9865c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/90c8145e | 2023-08-12T13:49:56.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1336
dataset_size: 178
---
# Dataset Card for "90c8145e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/ead6d7ef | 2023-08-12T17:02:17.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1341
dataset_size: 182
---
# Dataset Card for "ead6d7ef"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/9823869d | 2023-08-13T11:58:23.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1332
dataset_size: 178
---
# Dataset Card for "9823869d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/245282ee | 2023-08-14T15:27:29.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1331
dataset_size: 182
---
# Dataset Card for "245282ee"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e13f8e8b | 2023-08-14T15:27:31.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1331
dataset_size: 182
---
# Dataset Card for "e13f8e8b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b8e528dc | 2023-08-14T22:31:33.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1342
dataset_size: 178
---
# Dataset Card for "b8e528dc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hugcyp/LCSTS | 2023-08-16T02:48:38.000Z | [
"region:us"
] | hugcyp | null | null | null | 1 | 49 | Entry not found |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/68759f6d | 2023-08-16T16:29:21.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 176
num_examples: 10
download_size: 1331
dataset_size: 176
---
# Dataset Card for "68759f6d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/942ab115 | 2023-08-17T21:19:15.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1314
dataset_size: 178
---
# Dataset Card for "942ab115"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ymoslem/Law-StackExchange | 2023-08-20T17:25:54.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"legal",
"region:us"
] | ymoslem | null | null | null | 7 | 49 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- text-classification
- sentence-similarity
language:
- en
tags:
- legal
pretty_name: Law Stack Exchange Questions and Answers
size_categories:
- 10K<n<100K
---
All legal questions and their answers from the Law Stack Exchange site, up to 14 August 2023. The repository includes a notebook documenting the collection process via the official Stack Exchange API.
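A minimal sketch of that kind of collection loop (the repository's own notebook may differ; the endpoint and parameters below are the public Stack Exchange API, used here with assumed defaults):
```python
import requests

def fetch_law_questions(max_pages: int = 2) -> list[dict]:
    """Page through questions on the Law site via the Stack Exchange API."""
    items = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            "https://api.stackexchange.com/2.3/questions",
            params={
                "site": "law",
                "page": page,
                "pagesize": 100,
                "order": "desc",
                "sort": "creation",
                "filter": "withbody",  # include question bodies in the response
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        items.extend(data["items"])
        if not data.get("has_more"):
            break  # no further pages available
    return items
``` |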
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/5e9951c3 | 2023-08-21T09:20:49.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1339
dataset_size: 178
---
# Dataset Card for "5e9951c3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/180ed6b5 | 2023-08-22T06:47:53.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1339
dataset_size: 184
---
# Dataset Card for "180ed6b5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/9bf6da77 | 2023-08-24T02:54:39.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "9bf6da77"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/d78679c7 | 2023-08-24T02:54:42.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "d78679c7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/107a1506 | 2023-08-24T02:54:45.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "107a1506"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hitachi-nlp/FLD.v2 | 2023-10-03T12:19:29.000Z | [
"region:us"
] | hitachi-nlp | null | null | null | 3 | 49 | ---
dataset_info:
- config_name: default
features:
- name: hypothesis
dtype: string
- name: context
dtype: string
- name: hypothesis_formula
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proof_label
dtype: string
- name: proofs_formula
sequence: string
- name: world_assump_label
dtype: string
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_original_tree_depth
dtype: int64
- name: negative_proofs
sequence: string
- name: negative_proof_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
- name: version
dtype: string
splits:
- name: train
num_bytes: 102341795
num_examples: 30000
- name: validation
num_bytes: 17036757
num_examples: 5000
- name: test
num_bytes: 17032009
num_examples: 5000
download_size: 50518265
dataset_size: 136410561
- config_name: star
features:
- name: hypothesis
dtype: string
- name: context
dtype: string
- name: hypothesis_formula
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proof_label
dtype: string
- name: proofs_formula
sequence: string
- name: world_assump_label
dtype: string
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_original_tree_depth
dtype: int64
- name: negative_proofs
sequence: string
- name: negative_proof_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
- name: version
dtype: string
splits:
- name: train
num_bytes: 127005152
num_examples: 30000
- name: validation
num_bytes: 21077447
num_examples: 5000
- name: test
num_bytes: 21297828
num_examples: 5000
download_size: 61803899
dataset_size: 169380427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: star
data_files:
- split: train
path: star/train-*
- split: validation
path: star/validation-*
- split: test
path: star/test-*
---
# Dataset Card for "FLD.v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dippi9845/my_arxiv | 2023-09-01T13:21:11.000Z | [
"license:cc-by-nc-nd-3.0",
"region:us"
] | Dippi9845 | null | null | null | 0 | 49 | ---
license: cc-by-nc-nd-3.0
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/95de681c | 2023-08-29T12:39:25.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1338
dataset_size: 182
---
# Dataset Card for "95de681c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/7bd33ae8 | 2023-09-02T00:21:12.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1336
dataset_size: 184
---
# Dataset Card for "7bd33ae8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/56d4a1b5 | 2023-09-02T17:24:09.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1335
dataset_size: 184
---
# Dataset Card for "56d4a1b5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rizerphe/glaive-function-calling-v2-llama | 2023-09-05T12:51:42.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | rizerphe | null | null | null | 5 | 49 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 228119663
num_examples: 103091
download_size: 95393598
dataset_size: 228119663
pretty_name: a
---
# Glaive's Function Calling V2 for Llama2
[Glaive's Function Calling V2 dataset](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), formatted according to the Llama2 chat schema; any data that I wasn't able to convert automatically was removed manually.
Adds a special `<function>` token. Here's an example prompt:
```
<s>[INST] <<SYS>>
<function>Available functions:
<function>{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}
<</SYS>>
I need a new password. Can you generate one for me? [/INST] Of course! How long would you like your password to be? And do you want it to include numbers and special characters?</s><s>[INST] I want it to be 12 characters long and yes, it should include both numbers and special characters. [/INST]<function>generate_password
{
"length": 12,
"include_numbers": true,
"include_special_characters": true
}</s><s>[INST] <function>{"password": "4#7gB6&9L1!0"} [/INST] Here is your new password: 4#7gB6&9L1!0. Please make sure to save it in a secure place.</s>
```
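On the consuming side, a completion in this format can be split back into a function name and JSON arguments. A minimal sketch, assuming the call starts with the `<function>` token, followed by the function name on its own line and the arguments as JSON (this parser is illustrative, not part of the dataset):
```python
import json

def parse_function_call(completion: str):
    """Return (name, args) for a function call, or None for a plain reply."""
    if not completion.startswith("<function>"):
        return None  # no function call in this completion
    body = completion[len("<function>"):]
    name, _, args = body.partition("\n")
    return name.strip(), json.loads(args)

print(parse_function_call('<function>generate_password\n{"length": 12}'))
# -> ('generate_password', {'length': 12})
``` |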
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/04a71b5a | 2023-09-05T05:17:18.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1337
dataset_size: 184
---
# Dataset Card for "04a71b5a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e7874b25 | 2023-09-05T22:33:52.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1341
dataset_size: 182
---
# Dataset Card for "e7874b25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/392e8719 | 2023-09-07T15:26:19.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1336
dataset_size: 184
---
# Dataset Card for "392e8719"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/d9f92c60 | 2023-09-07T23:08:08.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1336
dataset_size: 180
---
# Dataset Card for "d9f92c60"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Linhz/qag_vimmrc2.0 | 2023-09-08T04:04:29.000Z | [
"region:us"
] | Linhz | null | null | null | 0 | 49 | Entry not found |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/fc019c9e | 2023-09-08T18:39:17.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "fc019c9e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/pubmed_subset_wiki_10p | 2023-09-20T11:46:57.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3163168567.210593
num_examples: 1110859
- name: test
num_bytes: 1024229
num_examples: 1000
download_size: 826503443
dataset_size: 3164192796.210593
---
# Dataset Card for "pubmed_subset_wiki_10p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/grammarly_coedit | 2023-09-21T16:25:22.000Z | [
"region:us"
] | dim | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: task
dtype: string
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 19943349
num_examples: 82466
download_size: 11658767
dataset_size: 19943349
---
# Dataset Card for "grammarly_coedit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/law_stackexchange_prompts | 2023-09-21T21:00:28.000Z | [
"region:us"
] | dim | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 64447591
num_examples: 24343
download_size: 38111723
dataset_size: 64447591
---
# Dataset Card for "law_stackexchange_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/pubmed_subset_c4_40p | 2023-09-24T12:45:55.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2339969714.971987
num_examples: 1666514
- name: test
num_bytes: 1024229
num_examples: 1000
download_size: 1389229225
dataset_size: 2340993943.971987
---
# Dataset Card for "pubmed_subset_c4_40p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mammoth-blaze/ParcelSummaryDS | 2023-09-27T20:38:12.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"doi:10.57967/hf/1149",
"region:us"
] | mammoth-blaze | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: contactNames
dtype: string
- name: parcelId
dtype: string
- name: parcelAddress
dtype: string
- name: description
dtype: string
- name: propertyUseCode
dtype: string
- name: acreage
dtype: string
- name: homestead
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 700
num_examples: 1
download_size: 639
dataset_size: 1400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-classification
size_categories:
- n<1K
---
# Dataset Card for "ParcelSummaryDS" |
tyzhu/squad_baseline_v4_train_10_eval_10 | 2023-09-26T14:58:51.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 45381
num_examples: 44
- name: validation
num_bytes: 47457
num_examples: 50
download_size: 43725
dataset_size: 92838
---
# Dataset Card for "squad_baseline_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shossain/govreport-qa-no-pad-16384 | 2023-10-04T04:09:51.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 49 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 808259754
num_examples: 7238
download_size: 215788123
dataset_size: 808259754
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-no-pad-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nthngdy/oscar-small | 2023-03-08T09:57:45.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb"... | nthngdy | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | @inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | null | 4 | 48 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from the [fastText](https://github.com/facebookresearch/fastText) one, called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, bounding the number of parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, in goclassy's pipeline one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler can allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
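A minimal sketch of that line-level filter (goclassy itself is written in Go; this Python analogue is only illustrative):
```python
def keep_line(raw: bytes) -> bool:
    """Keep a line only if it is valid UTF-8 and at least 100 characters long."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False  # lines containing invalid UTF-8 characters are discarded
    return len(text) >= 100  # lines shorter than 100 UTF-8 characters are discarded

lines = [b"too short", ("a" * 120).encode("utf-8"), b"\xff not valid utf-8"]
print([keep_line(line) for line in lines])  # -> [False, True, False]
```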
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
ScandEval/scala-nn | 2023-07-05T09:49:32.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:nn",
"license:cc-by-sa-4.0",
"region:us"
] | ScandEval | null | null | null | 0 | 48 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- nn
size_categories:
- 1K<n<10K
--- |
VietAI/spoken_norm_assignment | 2022-07-12T13:33:30.000Z | [
"region:us"
] | VietAI | null | null | null | 3 | 48 | # VietAI assignment: Vietnamese Inverse Text Normalization dataset
## Dataset Description
Inverse text normalization (ITN) is the task of transforming text from spoken style to written style. It is particularly useful in automatic speech recognition (ASR) systems, where proper names are often misrecognized as their pronunciations instead of their written forms. By applying ITN, we can significantly improve the readability of the ASR system's output. This dataset provides data for the ITN task in Vietnamese.
For example:
| Spoken | Written | Types |
|--------------------------------------------------|--------------|----------------------------|
| tám giờ chín phút ngày ba tháng tư năm hai nghìn | 8h9 3/4/2000 | time and date |
| tám mét khối năm mươi ki lô gam | 8m3 50 kg | number and unit of measure |
| không chín sáu hai bảy bảy chín chín không bốn | 0962779904 | phone number |
### Data Splits
The ITN dataset has 3 splits: _train_, _validation_, and _test_. In the _train_ and _validation_ splits, both the input (src) and the label (tgt) are provided. In the _test_ split, only the input (src) is provided.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 500,000 |
| Validation | 2,500 |
| Test | 2,500 |
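For instance, a minimal sketch for loading the splits with the `datasets` library; the column names `src` and `tgt` follow the split description above and are otherwise an assumption:
```python
from datasets import load_dataset

# Load all splits of the ITN dataset from the Hub.
dataset = load_dataset("VietAI/spoken_norm_assignment")

# Inspect one spoken/written pair from the training split.
# NOTE: the column names "src" and "tgt" are assumed from the description above.
example = dataset["train"][0]
print("spoken: ", example["src"])
print("written:", example["tgt"])
```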
|
fabiochiu/medium-articles | 2022-07-17T15:17:09.000Z | [
"license:mit",
"region:us"
] | fabiochiu | null | null | null | 5 | 48 | ---
license: mit
---
# Data source
This data was collected by scraping published articles from the [Medium website](https://medium.com/).
# Data description
Each row in the data is a different article published on Medium. For each article, you have the following features:
- **title** *[string]*: The title of the article.
- **text** *[string]*: The text content of the article.
- **url** *[string]*: The URL associated with the article.
- **authors** *[list of strings]*: The article authors.
- **timestamp** *[string]*: The publication datetime of the article.
- **tags** *[list of strings]*: List of tags associated with the article.
# Data analysis
You can find a very quick data analysis in this [notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis).
# What can I do with this data?
- A multilabel classification model that assigns tags to articles.
- A seq2seq model that generates article titles.
- Text analysis.
- Finetune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags.
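As a starting point, here is a hedged sketch of the last use case, filtering articles by tag with the `datasets` library (the `train` split name and the example tag are assumptions):
```python
from datasets import load_dataset

# Load the articles; the dataset is assumed to expose a single "train" split.
ds = load_dataset("fabiochiu/medium-articles")["train"]

# Keep only articles carrying a given tag, e.g. for domain-specific fine-tuning.
ml_articles = ds.filter(lambda row: "Machine Learning" in row["tags"])
print(len(ml_articles), "articles tagged 'Machine Learning'")
```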
# Collection methodology
Scraping has been done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape are selected by visiting:
1. The author archive pages.
2. The publication archive pages (if present).
3. The tags archives (if present).
The article HTML pages have been parsed with the [newspaper Python library](https://github.com/codelucas/newspaper).
Published articles have been filtered for English articles only, using the Python [langdetect library](https://pypi.org/project/langdetect/).
As a consequence of the collection methodology, the publication dates of the scraped articles are not uniformly distributed: there are articles published in both 2016 and 2022, but not in equal numbers, with a strong prevalence of articles published in 2020. Have a look at the [accompanying notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis) to see the distribution of the publication dates. |
imodels/compas-recidivism | 2022-08-13T04:17:29.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"interpretability",
"fairness",
"region:us"
] | imodels | null | null | null | 1 | 48 | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: compas-recidivism
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- interpretability
- fairness
task_categories:
- tabular-classification
task_ids: []
---
Port of the compas-recidivism dataset from ProPublica (GitHub [here](https://github.com/propublica/compas-analysis)). See details there and use carefully, as there are serious known social impacts and biases present in this dataset.
Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb).
The target is the binary outcome `is_recid`.
### Sample usage
Load the data:
```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("imodels/compas-recidivism")
df = pd.DataFrame(dataset['train'])
X = df.drop(columns=['is_recid'])
y = df['is_recid'].values
```
Fit a model:
```python
import imodels
import numpy as np
m = imodels.FIGSClassifier(max_rules=5)
m.fit(X, y)
print(m)
```
Evaluate:
```python
df_test = pd.DataFrame(dataset['test'])
X_test = df_test.drop(columns=['is_recid'])  # use the test frame, not the train one
y_test = df_test['is_recid'].values
print('accuracy', np.mean(m.predict(X_test) == y_test))
``` |
climatebert/environmental_claims | 2023-05-23T08:53:10.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2209.00507",
"region:us"
] | climatebert | null | null | null | 9 | 48 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: EnvironmentalClaims
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
splits:
- name: train
num_bytes: 346686
num_examples: 2117
- name: validation
num_bytes: 43018
num_examples: 265
- name: test
num_bytes: 42810
num_examples: 265
download_size: 272422
dataset_size: 432514
---
# Dataset Card for environmental_claims
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [arxiv.org/abs/2209.00507](https://arxiv.org/abs/2209.00507)
- **Leaderboard:**
- **Point of Contact:** [Dominik Stammbach](mailto:dominsta@ethz.ch)
### Dataset Summary
We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given sentence is an environmental claim or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
"text": "It will enable E.ON to acquire and leverage a comprehensive understanding of the transfor- mation of the energy system and the interplay between the individual submarkets in regional and local energy supply sys- tems.",
"label": 0
}
```
### Data Fields
- text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts
- label: the label (0 -> no environmental claim, 1 -> environmental claim)
### Data Splits
The dataset is split into:
- train: 2,400
- validation: 300
- test: 300
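A minimal sketch for loading the splits and checking the label balance; the repository id and field names follow this card:
```python
from datasets import load_dataset

# Load all splits; each example has a "text" sentence and a binary "label".
ds = load_dataset("climatebert/environmental_claims")

# Count how many training sentences are labelled as environmental claims (label 1).
train = ds["train"]
n_claims = sum(train["label"])
print(f"{n_claims}/{len(train)} training sentences are environmental claims")
```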
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earning calls transcripts.
For more information regarding our sample selection, please refer to Appendix B of our paper (see [Citation Information](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper (see [Citation Information](#citation-information)).
#### Who are the annotators?
The authors, together with students at the University of Zurich majoring in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Dominik Stammbach
- Nicolas Webersinke
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@misc{stammbach2022environmentalclaims,
title = {A Dataset for Detecting Real-World Environmental Claims},
author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus},
year = {2022},
doi = {10.48550/ARXIV.2209.00507},
url = {https://arxiv.org/abs/2209.00507},
publisher = {arXiv},
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
ysharma/short_jokes | 2022-09-26T17:11:06.000Z | [
"license:mit",
"region:us"
] | ysharma | null | null | null | 1 | 48 | ---
license: mit
---
**Context**
Generating humor is a complex machine learning task: it requires models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems are difficult to solve for a number of reasons, one of which is the lack of a database providing an extensive list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites hosting funny and short jokes.
You can visit the [GitHub repository](https://github.com/amoudgl/short-jokes-dataset) from [amoudgl](https://github.com/amoudgl) for more information on the data collection and the scripts used.
**Content**
This dataset is in the form of a csv file containing 231,657 jokes. Length of jokes ranges from 10 to 200 characters. Each line in the file contains a unique ID and joke.
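A hedged sketch for reading the file with pandas; the file name `shortjokes.csv` and the `ID`/`Joke` column names follow the original Kaggle release and are assumptions here:
```python
import pandas as pd

# Read the CSV; the file and column names are assumed from the Kaggle release.
df = pd.read_csv("shortjokes.csv")

print(len(df), "jokes")
# Joke lengths should range roughly from 10 to 200 characters, as stated above.
print(df["Joke"].str.len().describe())
```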
**Disclaimer**
An attempt has been made to keep the jokes as clean as possible. However, since the data was collected by scraping websites, a few jokes may be inappropriate or offensive to some people.
**Note**
This dataset is taken from the Kaggle dataset that can be found [here](https://www.kaggle.com/datasets/abhinavmoudgil95/short-jokes). |
projecte-aina/Parafraseja | 2023-09-13T12:37:01.000Z | [
"task_categories:text-classification",
"task_ids:multi-input-text-classification",
"annotations_creators:CLiC-UB",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | projecte-aina | Parafraseja is a dataset of 16,584 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from TE-ca and STS-ca. For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available. | null | 1 | 48 | ---
annotations_creators:
- CLiC-UB
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: Parafraseja
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-input-text-classification
---
# Dataset Card for Parafraseja
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [blanca.calvo@bsc.es](mailto:blanca.calvo@bsc.es)
### Dataset Summary
Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from [TE-ca](https://huggingface.co/datasets/projecte-aina/teca) and [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca). For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available.
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for paraphrase detection.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of pairs of sentences labelled with "Parafrasis" or "No Parafrasis", in jsonl format.
### Data Instances
<pre>
{
"id": "te1_14977_1",
"source": "teca",
"original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
"new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
"label": "Parafrasis"
}
</pre>
### Data Fields
- original: original sentence
- new: new sentence, which could be a paraphrase or a non-paraphrase
- label: relation between original and new
### Data Splits
* dev.json: 2,000 examples
* test.json: 4,000 examples
* train.json: 15,984 examples
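A minimal sketch for loading the dataset; the field names follow the data instance above, and the split names are assumed to be `train`/`validation`/`test`:
```python
from datasets import load_dataset

# Load the dataset from the Hub.
ds = load_dataset("projecte-aina/Parafraseja")

# Inspect one annotated sentence pair.
ex = ds["train"][0]
print(ex["original"])
print(ex["new"])
print(ex["label"])  # "Parafrasis" or "No Parafrasis"
```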
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The original sentences of this dataset came from the [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) and the [TE-ca](https://huggingface.co/datasets/projecte-aina/teca).
#### Initial Data Collection and Normalization
11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.
#### Who are the source language producers?
TE-ca and STS-ca come from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Y1Zs__uxXJF), which consists of several corpora gathered from web crawling and public corpora, and [Vilaweb](https://www.vilaweb.cat), a Catalan newswire.
### Annotations
The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.
#### Annotation process
The annotation process was done by a single annotator and reviewed by another.
#### Who are the annotators?
The annotators were Catalan native speakers with a background in linguistics.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Contributions
[N/A]
| |
bigbio/an_em | 2022-12-22T15:43:14.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigbio | AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions. | @inproceedings{ohta-etal-2012-open,
author = {Ohta, Tomoko and Pyysalo, Sampo and Tsujii, Jun{'}ichi and Ananiadou, Sophia},
title = {Open-domain Anatomical Entity Mention Detection},
journal = {},
volume = {W12-43},
year = {2012},
url = {https://aclanthology.org/W12-4304},
doi = {},
biburl = {},
bibsource = {},
publisher = {Association for Computational Linguistics}
} | null | 0 | 48 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: AnEM
homepage: http://www.nactem.ac.uk/anatomy/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
- RELATION_EXTRACTION
---
# Dataset Card for AnEM
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/anatomy/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,RE
AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions.
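A hedged loading sketch: BigBIO datasets conventionally expose a `<name>_source` and a `<name>_bigbio_kb` configuration, so the configuration name below is an assumption rather than a documented API:
```python
from datasets import load_dataset

# Load the harmonized knowledge-base view of AnEM.
# NOTE: the configuration name "an_em_bigbio_kb" follows the usual BigBIO
# naming convention and is an assumption here.
ds = load_dataset("bigbio/an_em", name="an_em_bigbio_kb")

# Print the first two annotated entity mentions of the first document.
print(ds["train"][0]["entities"][:2])
```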
## Citation Information
```
@inproceedings{ohta-etal-2012-open,
author = {Ohta, Tomoko and Pyysalo, Sampo and Tsujii, Jun{'}ichi and Ananiadou, Sophia},
title = {Open-domain Anatomical Entity Mention Detection},
journal = {},
volume = {W12-43},
year = {2012},
url = {https://aclanthology.org/W12-4304},
doi = {},
biburl = {},
bibsource = {},
publisher = {Association for Computational Linguistics}
}
```
|
bigbio/distemist | 2023-04-01T16:51:57.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022. | @article{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases
from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell,
Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras,
Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum.
CEUR Workshop Proceedings},
year={2022}
} | null | 2 | 48 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: DisTEMIST
homepage: https://zenodo.org/record/6671292
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for DisTEMIST
## Dataset Description
- **Homepage:** https://zenodo.org/record/6671292
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED
The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022.
## Citation Information
```
@article{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases
from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell,
Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras,
Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum.
CEUR Workshop Proceedings},
year={2022}
}
```
|
SandipPalit/Movie_Dataset | 2023-01-14T15:41:07.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"Movie",
"Cinema",
"Film",
"region:us"
] | SandipPalit | null | null | null | 2 | 48 | ---
task_categories:
- text-classification
- text-generation
- summarization
- sentence-similarity
language:
- en
tags:
- Movie
- Cinema
- Film
pretty_name: Movie Dataset
size_categories:
- 10K<n<100K
--- |
nielsr/countbench | 2023-03-07T20:53:01.000Z | [
"region:us"
] | nielsr | null | null | null | 1 | 48 | ---
dataset_info:
features:
- name: image_url
dtype: string
- name: text
dtype: string
- name: number
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 23622859.0
num_examples: 540
download_size: 23350530
dataset_size: 23622859.0
---
# Dataset Card for "countbench"
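A minimal loading sketch based on the feature names in the `dataset_info` above:
```python
from datasets import load_dataset

# Load the single train split and inspect one caption/count pair.
ds = load_dataset("nielsr/countbench", split="train")
print(ds[0]["text"], "->", ds[0]["number"])
```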
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marekk/testing_dataset_article_category | 2023-04-04T06:29:35.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"region:us"
] | marekk | null | null | null | 0 | 48 | ---
task_categories:
- text-classification
pretty_name: Testing dataset Article Category
size_categories:
- n<1K
--- |
michaelwzhu/ChatMed_Consult_Dataset | 2023-05-05T13:41:10.000Z | [
"task_categories:text-generation",
"language:zh",
"license:cc-by-4.0",
"medical consultation",
"finetuning",
"region:us"
] | michaelwzhu | null | null | null | 61 | 48 | ---
license: cc-by-4.0
language:
- zh
tags:
- medical consultation
- finetuning
pretty_name: ChatMed-Dataset
task_categories:
- text-generation
---
# Dataset Card for ChatMed
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/michaelwzhu/ChatMed-Datasets
- **Repository:** https://github.com/michael-wzhu/ChatMed
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Wei Zhu
### Dataset Summary
ChatMed-Dataset is a dataset of 110,113 medical query-response pairs (in Chinese) generated by OpenAI's `GPT-3.5` engine. The queries are crawled from several online medical consultation sites, reflecting medical needs in the real world. The responses are generated by the OpenAI engine. This dataset is designed to inject medical knowledge into Chinese large language models.
The dataset is growing rapidly. Stay tuned for updates!
In the era of large models, models tend to converge: for example, following the "open-sourcing" of LlaMa, the open-source community has produced a whole series of open models such as stanford-alpaca. However, AGI datasets for Chinese vertical domains remain relatively scarce.
The queries (i.e. prompts) in this dataset, ChatMed-Dataset, come from real medical consultation questions on the internet (110,113 of them), reflecting the real-world medical consultation needs of different users/patients. At present, all responses are generated by the OpenAI `GPT-3.5` engine. In follow-up work we will screen doctor and patient answers from the internet and select the best ones to build a higher-quality dataset.
### Supported Tasks and Leaderboards
The ChatMed-Dataset is designed for fine-tuning pretrained language models to perform better in automatic medical consultation.
### Languages
The data in ChatMed-Dataset are in Chinese (zh).
## Dataset Structure
### Data Instances
The dataset is in JSON Lines format, which is very convenient to use! An example from the ChatMed-Dataset looks as follows:
```json
{
"query": "你好,视神经萎缩,黄斑变性。怎样缓解,及日常注意事项。谢谢。",
"response": "视神经萎缩和黄斑变性是两种不同的疾病,需要分别进行治疗。\n\n视神经萎缩是指视神经逐渐受损,导致视力逐渐减退,常见的治疗方法包括:\n\n1.药物治疗:如促进视神经细胞再生的药物,改善视网膜供血的药物等。\n\n2.手术治疗:如视神经萎缩的手术治疗可以通过神经移植、修复和再生等手术方式来缓解症状。\n\n3.康复训练:通过视力训练等康复训练可以帮助患者恢复视力。\n\n黄斑变性则是指黄斑区域受损,导致中央视力减退,常见的治疗方法包括:\n\n1.药物治疗:如抗氧化剂、维生素等可以缓解症状。\n\n2.手术治疗:如黄斑脱离手术等可以恢复视力。\n\n3.日常注意事项:保持良好的生活习惯,如戒烟、控制血压、血糖等,避免暴晒和过度用眼,保持充足的睡眠和均衡的饮食等。\n\n需要根据具体情况选择合适的治疗方案,建议您咨询专业医生进行诊断和治疗。"
}
```
### Data Fields
The data fields are as follows:
* `query`: real-life medical query from the web.
* `response`: response generated by the OpenAI `GPT-3.5` engine.
### Data Splits
| | train |
|---------------|------:|
| ChatMed-Dataset | 110,113 |
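A minimal sketch for loading the pairs; the split name `train` follows the table above:
```python
from datasets import load_dataset

# Load the single training split of query-response pairs.
ds = load_dataset("michaelwzhu/ChatMed_Consult_Dataset", split="train")

# Each example is a {"query": ..., "response": ...} record, as shown above.
print(ds[0]["query"])
print(ds[0]["response"])
```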
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on how large language models responses to real-life medical query.
> More social impacts will be discussed soon!
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `ChatMed-Dataset` data is generated by a language model (`GPT-3.5`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{chatmed,
author = {Wei Zhu},
title = {ChatMed-Dataset: A GPT-generated medical query-response dataset for medical large language models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/michael-wzhu/ChatMed}},
}
```
### Contributions
[More Information Needed] |
Dr-BERT/QUAERO | 2023-06-12T20:53:41.000Z | [
"task_categories:token-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:fr",
"license:other",
"medical",
"region:us"
] | Dr-BERT | The QUAERO French Medical Corpus has been initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)
The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:
Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data):
- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format
Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data):
- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format
Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data):
- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_test_bioc folder: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format
This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool https://bitbucket.org/nicta_biomed/brat2bioc developed by Jimeno Yepes et al.
Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor
Brat2BioC: conversion tool between brat and BioC
BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013
Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT stand alone format. It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from https://quaerofrenchmed.limsi.fr
All questions regarding the task or data should be addressed to aurelie.neveol@limsi.fr | @InProceedings{neveol14quaero,
author = {Névéol, Aurélie and Grouin, Cyril and Leixa, Jeremy
and Rosset, Sophie and Zweigenbaum, Pierre},
title = {The {QUAERO} {French} Medical Corpus: A Ressource for
Medical Entity Recognition and Normalization},
OPTbooktitle = {Proceedings of the Fourth Workshop on Building
and Evaluating Ressources for Health and Biomedical
Text Processing},
booktitle = {Proc of BioTextMining Work},
OPTseries = {BioTxtM 2014},
year = {2014},
pages = {24--30},
} | null | 3 | 48 | ---
language:
- fr
license: other
multilinguality: monolingual
pretty_name: QUAERO
homepage: https://quaerofrenchmed.limsi.fr/
task_categories:
- token-classification
tags:
- medical
size_categories:
- 1K<n<10K
---
# Dataset Card for QUAERO
## Dataset Description
- **Homepage:** https://quaerofrenchmed.limsi.fr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** Named-Entity Recognition (NER)
The QUAERO French Medical Corpus has been initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)
The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:
Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data):
- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format
Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data):
- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format
Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data):
- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_test_bioc folder: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format
This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool https://bitbucket.org/nicta_biomed/brat2bioc developed by Jimeno Yepes et al.
Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor
Brat2BioC: conversion tool between brat and BioC
BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013
Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT stand alone format. It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from https://quaerofrenchmed.limsi.fr
All questions regarding the task or data should be addressed to aurelie.neveol@limsi.fr
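A hedged loading sketch: the configuration name below mirrors the EMEA/MEDLINE division described above and is an assumption about the repository's actual configuration names:
```python
from datasets import load_dataset

# Load the EMEA subset for token classification.
# NOTE: "emea" is an assumed configuration name; "medline" would be its
# counterpart under the same assumption.
ds = load_dataset("Dr-BERT/QUAERO", "emea")
print(ds["train"][0])
```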
## Citation Information
```
@InProceedings{neveol14quaero,
author = {Névéol, Aurélie and Grouin, Cyril and Leixa, Jeremy
and Rosset, Sophie and Zweigenbaum, Pierre},
title = {The {QUAERO} {French} Medical Corpus: A Ressource for
Medical Entity Recognition and Normalization},
OPTbooktitle = {Proceedings of the Fourth Workshop on Building
and Evaluating Ressources for Health and Biomedical
Text Processing},
booktitle = {Proc of BioTextMining Work},
OPTseries = {BioTxtM 2014},
year = {2014},
pages = {24--30},
}
``` |
inseq/scat | 2023-08-30T17:05:34.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:fr",
"license:unknown",
"contextual-mt",
"document-mt",
"region:us"
] | inseq | The Supporting Context for Ambiguous Translations corpus (SCAT) is a dataset
of English-to-French translations annotated with human rationales used for resolving ambiguity
in pronoun anaphora resolution for multi-sentence translation. | @inproceedings{yin-etal-2021-context,
title = "Do Context-Aware Translation Models Pay the Right Attention?",
author = "Yin, Kayo and
Fernandes, Patrick and
Pruthi, Danish and
Chaudhary, Aditi and
Martins, Andr{\'e} F. T. and
Neubig, Graham",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.65",
doi = "10.18653/v1/2021.acl-long.65",
pages = "788--801",
} | null | 1 | 48 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
language_creators:
- expert-generated
license:
- unknown
multilinguality:
- translation
pretty_name: scat
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- contextual-mt
- document-mt
task_categories:
- translation
task_ids: []
---
# Dataset Card for SCAT
## Table of Contents
- [Dataset Card for SCAT](#dataset-card-for-scat)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Preprocessing](#additional-preprocessing)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/neulab/contextual-mt/tree/master/data/scat)
- **Paper:** [ACL 2021](https://aclanthology.org/2021.acl-long.65/)
- **Point of Contact:** [Kayo Yin](mailto:kayoyin@berkeley.edu)
### Dataset Summary
The Supporting Context for Ambiguous Translations corpus (SCAT) is a dataset of English-to-French translations annotated with human rationales used for resolving ambiguity in pronoun anaphora resolution for multi-sentence translation.
**Disclaimer**: *The SCAT corpus was released in the ACL 2021 paper ["Do Context-Aware Translation Models Pay the Right Attention?"](https://aclanthology.org/2021.acl-long.65/) by Yin et al. (2021), and an original version of the corpus is hosted on [Github](https://github.com/neulab/contextual-mt/tree/master/data/scat) with no licensing information. This dataset contains a curated version of the original corpus where examples containing nested or malformed tags were removed (refer to the [filter_scat.py](filter_scat.py) script for more details).*
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://aclanthology.org/2021.acl-long.65/) for additional details on plausibility evaluation for document-level MT systems.
### Languages
The dataset contains source English (`en`) sentences with anaphoric pronouns, selected from the [OpenSubtitles 2018](https://www.aclweb.org/anthology/L18-1275/) corpus and translated into French (`fr`).
## Dataset Structure
### Data Instances
The dataset contains a single default configuration. Dataset examples have the following format:
```json
{
"id": 0,
"context_en": "Air, water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them sun and water",
"en": "But I gave one special attention to see if it would grow more.",
"context_fr": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux plantes , et je leur ai donné de l'eau et du soleil.",
"fr": "Mais j'ai donné une attention particulière à une pour voir si elle grandit plus.",
"contrast_fr": "Mais j'ai donné une attention particulière à une pour voir si il grandit plus.",
"context_en_with_tags": "Air, water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them sun and water",
"en_with_tags": "But I gave one special attention to see if <p>it</p> would grow more.",
"context_fr_with_tags": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux <hon>plantes<hoff> , et je leur ai donné de l'eau et du soleil.",
"fr_with_tags": "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus.",
"contrast_fr_with_tags": "Mais j'ai donné une attention particulière à une pour voir si <p>il</p> grandit plus.",
"has_supporting_context": true,
"has_supporting_preceding_context": true,
}
```
In every example, the pronoun of interest and its translation are surrounded by `<p>...</p>` tags. These are guaranteed to be found in the `en_with_tags` and `fr_with_tags` field, respectively.
Any span surrounded by `<hon>...<hoff>` tags was identified by human annotators as supporting context to correctly translate the pronoun of interest. These spans can be missing altogether (i.e. no contextual information needed), or they can be found in any of the available fields. The `has_supporting_context` field indicates whether the example contains any supporting context.
In the example above, the translation of the pronoun `it` (field `en`) is ambiguous, and the correct translation to the feminine French pronoun `elle` (in field `fr`) is only possible thanks to the supporting feminine noun `plantes` in the field `context_fr`. Since the example contains supporting context, the `has_supporting_context` field is set to `true`. Since the context is not in the same sentence as the target pronoun but in a preceding one, `has_supporting_preceding_context` is also true.
Fields with the `_with_tags` suffix contain tags around pronouns of interest and supporting context, while their counterparts without the suffix contain the same text without tags, to facilitate direct usage with machine translation models.
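For example, a minimal sketch that recovers the tagged spans from one example using the conventions just described (standard-library regex only):
```python
import re

from datasets import load_dataset

ds = load_dataset("inseq/scat", split="test")
ex = ds[0]

# <p>...</p> tags around the pronoun of interest are guaranteed in the
# en_with_tags/fr_with_tags fields; <hon>...<hoff> supporting spans may
# appear in any field, or not at all.
pronoun = re.search(r"<p>(.*?)</p>", ex["fr_with_tags"]).group(1)
supporting = re.findall(r"<hon>(.*?)<hoff>", ex["context_fr_with_tags"])
print("pronoun:", pronoun)
print("supporting spans:", supporting)
```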
### Data Splits
The dataset is split into `train`, `validation` and `test` sets. In the following table, we report the number of examples in the original dataset and in this filtered version in which examples containing malformed tags were removed.
| Split | # Examples (original) | # Examples (**this**) |
| :-----------: | :-------------------: | :-------------------: |
| `train` | 11471 | 6086 |
| `validation` | 145 | 66 |
| `test` | 1000 | 500 |
### Dataset Creation
From the original paper:
>We recruited 20 freelance English-French translators on Upwork. We annotate examples from the contrastive test set by Lopes et al. (2020). This set includes 14K examples from the OpenSubtitles2018 dataset. Through our annotation effort, we obtain 14K examples of supporting context for pronoun anaphora resolution in ambiguous translations selected by professional human translators.
Please refer to the original article [Do Context-Aware Translation Models Pay the Right Attention?](https://aclanthology.org/2021.acl-long.65/) for additional information on dataset creation.
### Additional Preprocessing
Compared to the original SCAT corpus, the following differences are present in this version:
- Examples were filtered using the [filter_scat.py](filter_scat.py) script to retain only examples containing well-formed tags, and to remove superfluous tags. Superfluous tags are defined as nested `<hon><p>...</p><hoff>` tags that represent lack of contextual information for disambiguating the correct pronoun. In this case, the outer `<hon>...<hoff>` tag was removed. We also filter examples that do not contain tags for the pronoun of interest in the target sentence for English and French (`en`, `fr`).
- Sentences stripped from tags are provided in fields without the `_with_tags` suffix.
- Tags erroneously embedded within words of the target sentence (e.g. `just s<p>it</p>ting there, wa<p>it</p>ing for his face`) were removed.
- An extra contrastive sentence using the pronoun of interest belonging to the opposite gender is available in the `contrast_fr` field. The swap was performed using a simple lexical heuristic (refer to `swap_pronoun` in [`scat.py`](./scat.py)), and we do not guarantee grammatical correctness of the sentence.
The test set was manually curated to include 250 examples with supporting context in the previous sentences (`has_supporting_preceding_context` is `true`), and 250 examples with supporting context in the same sentence (`has_supporting_preceding_context` is `false`).
## Additional Information
### Dataset Curators
The original authors of SCAT are the curators of the original released dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The dataset license is unknown.
### Citation Information
Please cite the authors if you use this corpus in your work.
```bibtex
@inproceedings{yin-etal-2021-context,
title = "Do Context-Aware Translation Models Pay the Right Attention?",
author = "Yin, Kayo and
Fernandes, Patrick and
Pruthi, Danish and
Chaudhary, Aditi and
Martins, Andr{\'e} F. T. and
Neubig, Graham",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.65",
doi = "10.18653/v1/2021.acl-long.65",
pages = "788--801",
}
``` |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/6c06c658 | 2023-05-15T12:42:08.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1338
dataset_size: 182
---
# Dataset Card for "6c06c658"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/7254e21f | 2023-05-15T13:03:40.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1339
dataset_size: 180
---
# Dataset Card for "7254e21f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0840a30b | 2023-05-15T13:25:43.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1342
dataset_size: 178
---
# Dataset Card for "0840a30b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/deec2759 | 2023-05-15T13:45:22.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1329
dataset_size: 182
---
# Dataset Card for "deec2759"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/747e7d53 | 2023-05-15T13:59:59.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1341
dataset_size: 180
---
# Dataset Card for "747e7d53"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/f2585c11 | 2023-05-15T14:29:57.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1320
dataset_size: 188
---
# Dataset Card for "f2585c11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0ca7bb7b | 2023-05-15T15:42:02.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1337
dataset_size: 182
---
# Dataset Card for "0ca7bb7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e6ed0e01 | 2023-05-15T20:23:29.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1336
dataset_size: 186
---
# Dataset Card for "e6ed0e01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/52473874 | 2023-05-15T21:29:43.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1341
dataset_size: 180
---
# Dataset Card for "52473874"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c961a6e2 | 2023-05-15T22:30:44.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1338
dataset_size: 186
---
# Dataset Card for "c961a6e2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/d7749d15 | 2023-05-16T01:57:21.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1341
dataset_size: 178
---
# Dataset Card for "d7749d15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/62de9313 | 2023-05-16T03:08:24.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1332
dataset_size: 180
---
# Dataset Card for "62de9313"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/720c5d3f | 2023-05-17T16:21:42.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1341
dataset_size: 182
---
# Dataset Card for "720c5d3f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/aac4766c | 2023-05-17T21:00:37.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1336
dataset_size: 188
---
# Dataset Card for "aac4766c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0f1659c6 | 2023-05-17T21:01:41.000Z | [
"region:us"
] | results-sd-v1-5-sd-v2-1-if-v1-0-karlo | null | null | null | 0 | 48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1326
dataset_size: 184
---
# Dataset Card for "0f1659c6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |