id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
alexrs/alpaca-cleaned-30-clusters | alexrs | 2023-10-16T14:44:34Z | 105 | 0 | null | [
"region:us"
] | 2023-10-16T14:44:34Z | 2023-10-16T14:44:30.000Z | 2023-10-16T14:44:30 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 40490946
num_examples: 51760
download_size: 24195677
dataset_size: 40490946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-cleaned-30-clusters"
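The `cluster` feature is an `int32` id that partitions the 51,760 instruction examples into 30 clusters. A minimal sketch of bucketing rows by cluster id (the rows below are hypothetical stand-ins that only mirror the schema, not real dataset records):

```python
from collections import defaultdict

# Hypothetical rows mirroring the card's schema:
# instruction / input / output strings plus a cluster id.
rows = [
    {"instruction": "Summarize the text.", "input": "...", "output": "...", "cluster": 3},
    {"instruction": "Translate to French.", "input": "...", "output": "...", "cluster": 7},
    {"instruction": "Shorten this paragraph.", "input": "...", "output": "...", "cluster": 3},
]

# Bucket examples by their cluster id.
by_cluster = defaultdict(list)
for row in rows:
    by_cluster[row["cluster"]].append(row)

print(sorted(by_cluster))  # cluster ids present in this toy sample
print(len(by_cluster[3]))  # examples landing in cluster 3
```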
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8724452257156372,
-0.29266300797462463,
0.3046573996543884,
0.3076786398887634,
-0.2809907793998718,
-0.02209457755088806,
0.2584841251373291,
-0.22056573629379272,
0.9889364242553711,
0.6122406125068665,
-0.8972985148429871,
-0.9470528364181519,
-0.5436582565307617,
-0.2355789244174957... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jackmin108/xtreme | Jackmin108 | 2023-10-21T20:14:19Z | 105 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-21T20:14:19Z | 2023-10-20T13:16:37.000Z | 2023-10-20T13:16:37 | ---
license: apache-2.0
configs:
- config_name: mnli
data_files:
- split: train
path:
- "mnli/train-0000.parquet"
- "mnli/train-0001.parquet"
- "mnli/train-0002.parquet"
- "mnli/train-0003.parquet"
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: tydiqa
data_files:
- split: train
path:
- "tydiqa/ko/train.parquet"
- "tydiqa/sw/train.parquet"
- "tydiqa/ru/train.parquet"
- "tydiqa/te/train.parquet"
- "tydiqa/ar/train.parquet"
- "tydiqa/fi/train.parquet"
- "tydiqa/bn/train.parquet"
- "tydiqa/en/train.parquet"
- "tydiqa/id/train.parquet"
- split: validation
path:
- "tydiqa/ko/validation.parquet"
- "tydiqa/sw/validation.parquet"
- "tydiqa/ru/validation.parquet"
- "tydiqa/te/validation.parquet"
- "tydiqa/ar/validation.parquet"
- "tydiqa/fi/validation.parquet"
- "tydiqa/bn/validation.parquet"
- "tydiqa/en/validation.parquet"
- "tydiqa/id/validation.parquet"
- config_name: tydiqa.ko
data_files:
- split: train
path: "tydiqa/ko/train.parquet"
- split: validation
path: "tydiqa/ko/validation.parquet"
- config_name: tydiqa.sw
data_files:
- split: train
path: "tydiqa/sw/train.parquet"
- split: validation
path: "tydiqa/sw/validation.parquet"
- config_name: tydiqa.ru
data_files:
- split: train
path: "tydiqa/ru/train.parquet"
- split: validation
path: "tydiqa/ru/validation.parquet"
- config_name: tydiqa.te
data_files:
- split: train
path: "tydiqa/te/train.parquet"
- split: validation
path: "tydiqa/te/validation.parquet"
- config_name: tydiqa.ar
data_files:
- split: train
path: "tydiqa/ar/train.parquet"
- split: validation
path: "tydiqa/ar/validation.parquet"
- config_name: tydiqa.fi
data_files:
- split: train
path: "tydiqa/fi/train.parquet"
- split: validation
path: "tydiqa/fi/validation.parquet"
- config_name: tydiqa.bn
data_files:
- split: train
path: "tydiqa/bn/train.parquet"
- split: validation
path: "tydiqa/bn/validation.parquet"
- config_name: tydiqa.en
data_files:
- split: train
path: "tydiqa/en/train.parquet"
- split: validation
path: "tydiqa/en/validation.parquet"
- config_name: tydiqa.id
data_files:
- split: train
path: "tydiqa/id/train.parquet"
- split: validation
path: "tydiqa/id/validation.parquet"
- config_name: xnli
data_files:
- split: validation
path:
- xnli/hi/validation.parquet
- xnli/zh/validation.parquet
- xnli/sw/validation.parquet
- xnli/tr/validation.parquet
- xnli/en/validation.parquet
- xnli/th/validation.parquet
- xnli/ru/validation.parquet
- xnli/ar/validation.parquet
- xnli/vi/validation.parquet
- xnli/bg/validation.parquet
- xnli/es/validation.parquet
- xnli/el/validation.parquet
- xnli/fr/validation.parquet
- xnli/ur/validation.parquet
- xnli/de/validation.parquet
- split: test
path:
- xnli/hi/test.parquet
- xnli/zh/test.parquet
- xnli/sw/test.parquet
- xnli/tr/test.parquet
- xnli/en/test.parquet
- xnli/th/test.parquet
- xnli/ru/test.parquet
- xnli/ar/test.parquet
- xnli/vi/test.parquet
- xnli/bg/test.parquet
- xnli/es/test.parquet
- xnli/el/test.parquet
- xnli/fr/test.parquet
- xnli/ur/test.parquet
- xnli/de/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.hi
data_files:
- split: validation
path: xnli/hi/validation.parquet
- split: test
path: xnli/hi/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.zh
data_files:
- split: validation
path: xnli/zh/validation.parquet
- split: test
path: xnli/zh/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.sw
data_files:
- split: validation
path: xnli/sw/validation.parquet
- split: test
path: xnli/sw/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.tr
data_files:
- split: validation
path: xnli/tr/validation.parquet
- split: test
path: xnli/tr/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.en
data_files:
- split: validation
path: xnli/en/validation.parquet
- split: test
path: xnli/en/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.th
data_files:
- split: validation
path: xnli/th/validation.parquet
- split: test
path: xnli/th/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.ru
data_files:
- split: validation
path: xnli/ru/validation.parquet
- split: test
path: xnli/ru/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.ar
data_files:
- split: validation
path: xnli/ar/validation.parquet
- split: test
path: xnli/ar/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.vi
data_files:
- split: validation
path: xnli/vi/validation.parquet
- split: test
path: xnli/vi/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.bg
data_files:
- split: validation
path: xnli/bg/validation.parquet
- split: test
path: xnli/bg/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.es
data_files:
- split: validation
path: xnli/es/validation.parquet
- split: test
path: xnli/es/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.el
data_files:
- split: validation
path: xnli/el/validation.parquet
- split: test
path: xnli/el/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.fr
data_files:
- split: validation
path: xnli/fr/validation.parquet
- split: test
path: xnli/fr/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.ur
data_files:
- split: validation
path: xnli/ur/validation.parquet
- split: test
path: xnli/ur/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.de
data_files:
- split: validation
path: xnli/de/validation.parquet
- split: test
path: xnli/de/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: paws-x.de
data_files:
- split: train
path: paws-x/de/train.parquet
- split: validation
path: paws-x/de/validation.parquet
- split: test
path: paws-x/de/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.en
data_files:
- split: train
path: paws-x/en/train.parquet
- split: validation
path: paws-x/en/validation.parquet
- split: test
path: paws-x/en/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.es
data_files:
- split: train
path: paws-x/es/train.parquet
- split: validation
path: paws-x/es/validation.parquet
- split: test
path: paws-x/es/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.fr
data_files:
- split: train
path: paws-x/fr/train.parquet
- split: validation
path: paws-x/fr/validation.parquet
- split: test
path: paws-x/fr/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.ja
data_files:
- split: train
path: paws-x/ja/train.parquet
- split: validation
path: paws-x/ja/validation.parquet
- split: test
path: paws-x/ja/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.ko
data_files:
- split: train
path: paws-x/ko/train.parquet
- split: validation
path: paws-x/ko/validation.parquet
- split: test
path: paws-x/ko/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.zh
data_files:
- split: train
path: paws-x/zh/train.parquet
- split: validation
path: paws-x/zh/validation.parquet
- split: test
path: paws-x/zh/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
---
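The per-language configurations above follow a `<task>.<lang>` naming scheme (`tydiqa.ko`, `xnli.hi`, `paws-x.de`, and so on). A small sketch that rebuilds those config names from the language lists in the YAML, e.g. for iterating over every language of a task:

```python
# Language codes copied from the config lists above.
TASK_LANGS = {
    "tydiqa": ["ko", "sw", "ru", "te", "ar", "fi", "bn", "en", "id"],
    "xnli": ["hi", "zh", "sw", "tr", "en", "th", "ru", "ar", "vi",
             "bg", "es", "el", "fr", "ur", "de"],
    "paws-x": ["de", "en", "es", "fr", "ja", "ko", "zh"],
}

def config_names(task: str) -> list[str]:
    """Build '<task>.<lang>' config names for one task."""
    return [f"{task}.{lang}" for lang in TASK_LANGS[task]]

print(config_names("paws-x"))  # ['paws-x.de', 'paws-x.en', ...]
```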
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kaitchup/opus-Indonesian-to-English | kaitchup | 2023-11-01T19:15:17Z | 105 | 1 | null | [
"region:us"
] | 2023-11-01T19:15:17Z | 2023-11-01T19:15:12.000Z | 2023-11-01T19:15:12 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: validation
num_bytes: 182024
num_examples: 2000
- name: train
num_bytes: 74451703
num_examples: 989529
download_size: 53126195
dataset_size: 74633727
---
# Dataset Card for "opus-id-en"
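The split metadata above is internally consistent: `dataset_size` is the sum of the two splits' byte counts. A quick arithmetic check:

```python
validation_bytes = 182_024      # 2,000 validation examples
train_bytes = 74_451_703        # 989,529 training examples
dataset_size = 74_633_727       # value declared in the card

# The declared dataset_size equals the sum of the split sizes.
print(validation_bytes + train_bytes == dataset_size)  # True
```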
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5262299180030823,
-0.16085439920425415,
0.23130974173545837,
0.3310612440109253,
-0.3072739243507385,
-0.10470747202634811,
0.1409159153699875,
-0.1004781723022461,
0.8189362287521362,
0.4779894948005676,
-0.9439737200737,
-0.9504457116127014,
-0.4454711079597473,
-0.06945610046386719,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Long1311/alphaca-25k | Long1311 | 2023-11-11T05:11:00Z | 105 | 0 | null | [
"region:us"
] | 2023-11-11T05:11:00Z | 2023-11-05T03:30:38.000Z | 2023-11-05T03:30:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pkarypis/ultrachat_filtered | pkarypis | 2023-11-16T23:05:33Z | 105 | 0 | null | [
"region:us"
] | 2023-11-16T23:05:33Z | 2023-11-16T22:51:33.000Z | 2023-11-16T22:51:33 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: test_gen
num_bytes: 148276089
num_examples: 28304
- name: test_sft
num_bytes: 154695659
num_examples: 23110
- name: train_gen
num_bytes: 1347396812
num_examples: 256032
- name: train_sft
num_bytes: 1350777817.931667
num_examples: 200979
download_size: 1596770502
dataset_size: 3001146377.9316673
---
# Dataset Card for "ultrachat_filtered"
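Each record's `messages` field is a list of `{content, role}` dicts per the schema above. A small validator sketch for that shape (the example conversation is hypothetical, not taken from the dataset):

```python
def valid_messages(messages: list[dict]) -> bool:
    """Check that every message has string 'content' and 'role' fields."""
    return all(
        isinstance(m.get("content"), str) and isinstance(m.get("role"), str)
        for m in messages
    )

# Hypothetical record shaped like the schema above.
example = [
    {"role": "user", "content": "How do glaciers form?"},
    {"role": "assistant", "content": "Glaciers form when snow accumulates over many years..."},
]
print(valid_messages(example))  # True
```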
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5699363350868225,
-0.5374593138694763,
0.2602177858352661,
0.060116421431303024,
-0.4968734085559845,
0.1485530138015747,
0.3392567038536072,
-0.2494535744190216,
0.8538781404495239,
0.8389818668365479,
-0.8331774473190308,
-0.8222620487213135,
-0.32984837889671326,
-0.3736993074417114,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KantoRegion/hp_conversations_with_hermione_granger_movie | KantoRegion | 2023-11-20T16:12:20Z | 105 | 0 | null | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-11-20T16:12:20Z | 2023-11-17T15:24:45.000Z | 2023-11-17T15:24:45 | ---
task_categories:
- conversational
language:
- en
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aakanksha19/pico_bigbio_processed | aakanksha19 | 2023-11-21T05:20:46Z | 105 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-21T05:20:46Z | 2023-11-21T05:17:08.000Z | 2023-11-21T05:17:08 | ---
license: unknown
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/bornholmsk_parallel | strombergnlp | 2022-07-01T15:45:35Z | 104 | 2 | bornholmsk-parallel | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"region:us"
] | 2022-07-01T15:45:35Z | 2022-05-11T08:29:38.000Z | 2022-05-11T08:29:38 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
- da-bornholm
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: Bornholmsk/Danish Parallel Texts
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: bornholmsk-parallel
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/bornholmsk](https://github.com/StrombergNLP/bornholmsk)
- **Repository:** [https://github.com/StrombergNLP/bornholmsk](https://github.com/StrombergNLP/bornholmsk)
- **Paper:** [https://aclanthology.org/W19-6138/](https://aclanthology.org/W19-6138/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 490 KB
- **Size of the generated dataset:** 582 KB
- **Total amount of disk used:** 1072 KB
### Dataset Summary
This dataset is parallel text for Bornholmsk and Danish.
For more details, see the paper [Bornholmsk Natural Language Processing: Resources and Tools](https://aclanthology.org/W19-6138/).
### Supported Tasks and Leaderboards
*
### Languages
Bornholmsk, a language variant of Danish spoken on the island of Bornholm, and Danish. bcp47: `da-bornholm` and `da-DK`
## Dataset Structure
### Data Instances
### Data Fields
`id`: the sentence ID, `int`
`da-bornholm`: the Bornholmsk text, `string`
`da`: the Danish translation, `string`
### Data Splits
* Train: 5785 sentence pairs
* Validation: 500 sentence pairs
* Test: 500 sentence pairs
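Per the Data Fields section, each record is an integer `id` plus a Bornholmsk/Danish sentence pair, and the three splits above total 6,785 pairs. A placeholder record in that shape (the sentence texts are stand-ins, not real corpus entries):

```python
# Hypothetical record in the documented shape.
pair = {
    "id": 0,
    "da-bornholm": "<Bornholmsk sentence>",
    "da": "<Danish translation>",
}

# Field types as documented: int id, two strings.
assert isinstance(pair["id"], int)
assert isinstance(pair["da-bornholm"], str) and isinstance(pair["da"], str)

# Total pairs across the three splits listed above.
total_pairs = 5785 + 500 + 500
print(total_pairs)  # 6785
```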
## Dataset Creation
### Curation Rationale
To gather as much parallel Bornholmsk text as possible.
### Source Data
#### Initial Data Collection and Normalization
From a translation of Kuhre's Sansager, a selection of colloquial resources, and a prototype Bornholmsk/Danish dictionary
#### Who are the source language producers?
Native speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
Native speakers of Bornholmsk, mostly aged 60+.
### Personal and Sensitive Information
Unknown, but the risk of any being present is low, given the source material.
## Considerations for Using the Data
### Social Impact of Dataset
The hope behind this data is to enable people to learn and use Bornholmsk.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
@inproceedings{derczynski-kjeldsen-2019-bornholmsk,
title = "Bornholmsk Natural Language Processing: Resources and Tools",
author = "Derczynski, Leon and
Kjeldsen, Alex Speed",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6138",
pages = "338--344",
}
``` | [
-0.5069226622581482,
-0.659329891204834,
0.2657212018966675,
0.2975161373615265,
-0.30318304896354675,
0.15965060889720917,
-0.5498010516166687,
-0.587029755115509,
0.5711255669593811,
0.5758209228515625,
-0.6929596662521362,
-0.9845055937767029,
-0.5510607361793518,
0.4354388117790222,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-1700000-1750000 | tomekkorbak | 2022-10-04T23:58:51Z | 104 | 0 | null | [
"region:us"
] | 2022-10-04T23:58:51Z | 2022-10-04T23:58:43.000Z | 2022-10-04T23:58:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
venelin/inferes | venelin | 2022-10-08T01:25:47Z | 104 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"nli",
"spanish"... | 2022-10-08T01:25:47Z | 2022-10-07T16:57:37.000Z | 2022-10-07T16:57:37 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: InferES
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- nli
- spanish
- negation
- coreference
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for InferES
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/venelink/inferes
- **Repository:** https://github.com/venelink/inferes
- **Paper:** https://arxiv.org/abs/2210.03068
- **Point of Contact:** venelin [at] utexas [dot] edu
### Dataset Summary
A Natural Language Inference dataset for European Spanish.
The accompanying paper was accepted at COLING 2022.
### Supported Tasks and Leaderboards
Natural Language Inference
### Languages
Spanish
## Dataset Structure
The dataset contains two text inputs (Premise and Hypothesis), a Label for three-way classification, and annotation data.
### Data Instances
train size = 6444
test size = 1612
### Data Fields
ID : the unique ID of the instance
Premise
Hypothesis
Label: cnt, ent, neutral
Topic: 1 (Picasso), 2 (Columbus), 3 (Videogames), 4 (Olympic games), 5 (EU), 6 (USSR)
Anno: ID of the annotators (in cases of undergrads or crowd - the ID of the group)
Anno Type: Generate, Rewrite, Crowd, and Automated
### Data Splits
train size = 6444
test size = 1612
The train/test split is stratified by a key that combines Label + Anno + Anno type
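A sketch of building that stratification key — concatenating Label, Anno, and Anno Type per instance, as one might pass to a stratified splitter. The field values below are hypothetical illustrations:

```python
def stratify_key(instance: dict) -> str:
    """Combine Label + Anno + Anno Type into a single stratification key."""
    return f'{instance["label"]}|{instance["anno"]}|{instance["anno_type"]}'

# Hypothetical instance with the card's annotation fields.
instance = {"label": "ent", "anno": "G1", "anno_type": "Generate"}
print(stratify_key(instance))  # 'ent|G1|Generate'
```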
### Source Data
Wikipedia, plus text produced by "sentence generators" hired as part of the dataset creation process.
#### Who are the annotators?
Native speakers of European Spanish
### Personal and Sensitive Information
No personal or sensitive information is included.
Annotators are anonymized and only kept as "ID" for research purposes.
### Dataset Curators
Venelin Kovatchev
### Licensing Information
cc-by-4.0
### Citation Information
To be added after proceedings from COLING 2022 appear
### Contributions
Thanks to [@venelink](https://github.com/venelink) for adding this dataset.
| [
-0.47406908869743347,
-0.5665661692619324,
0.35627108812332153,
0.43460896611213684,
-0.11532066762447357,
-0.0749925896525383,
-0.2612646520137787,
-0.49839478731155396,
0.4961046278476715,
0.6623136401176453,
-0.7028201818466187,
-0.8303930163383484,
-0.6240107417106628,
0.29980859160423... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qanastek/HoC | qanastek | 2022-11-01T15:03:11Z | 104 | 1 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-11-01T15:03:11Z | 2022-11-01T10:49:52.000Z | 2022-11-01T10:49:52 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: HoC
language_bcp47:
- en-US
---
# HoC : Hallmarks of Cancer Corpus
## Table of Contents
- [HoC : Hallmarks of Cancer Corpus](#hoc--hallmarks-of-cancer-corpus)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://s-baker.net/resource/hoc/
- **Repository:** https://github.com/sb895/Hallmarks-of-Cancer
- **Paper:** https://academic.oup.com/bioinformatics/article/32/3/432/1743783
- **Leaderboard:** https://paperswithcode.com/dataset/hoc-1
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under "text" directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the [Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/), which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `multi-class-classification`.
### Languages
The corpus consists of PubMed articles in English only:
- `English - United States (en-US)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/HoC")
validation = dataset["validation"]
print("First element of the validation set : ", validation[0])
```
## Dataset Structure
### Data Instances
```json
{
"document_id": "12634122_5",
"text": "Genes that were overexpressed in OM3 included oncogenes , cell cycle regulators , and those involved in signal transduction , whereas genes for DNA repair enzymes and inhibitors of transformation and metastasis were suppressed .",
"label": [9, 5, 0, 6]
}
```
### Data Fields
`document_id`: Unique identifier of the document.
`text`: Raw text of the PubMed abstracts.
`label`: A list of zero or more of the 10 hallmarks of cancer assigned to the sentence.
| Hallmark | Search term |
|:-------------------------------------------:|:-------------------------------------------:|
| 1. Sustaining proliferative signaling (PS) | Proliferation Receptor Cancer |
| | 'Growth factor' Cancer |
| | 'Cell cycle' Cancer |
| 2. Evading growth suppressors (GS) | 'Cell cycle' Cancer |
| | 'Contact inhibition' |
| 3. Resisting cell death (CD) | Apoptosis Cancer |
| | Necrosis Cancer |
| | Autophagy Cancer |
| 4. Enabling replicative immortality (RI) | Senescence Cancer |
| | Immortalization Cancer |
| 5. Inducing angiogenesis (A) | Angiogenesis Cancer |
| | 'Angiogenic factor' |
| 6. Activating invasion & metastasis (IM) | Metastasis Invasion Cancer |
| 7. Genome instability & mutation (GI) | Mutation Cancer |
| | 'DNA repair' Cancer |
| | Adducts Cancer |
| | 'Strand breaks' Cancer |
| | 'DNA damage' Cancer |
| 8. Tumor-promoting inflammation (TPI) | Inflammation Cancer |
| | 'Oxidative stress' Cancer |
| | Inflammation 'Immune response' Cancer |
| 9. Deregulating cellular energetics (CE) | Glycolysis Cancer; 'Warburg effect' Cancer |
| 10. Avoiding immune destruction (ID) | 'Immune system' Cancer |
| | Immunosuppression Cancer |
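Assuming the integer ids in `label` index the ten hallmarks above in their listed order (0 = PS … 9 = ID) — an assumption, since the card does not spell out the id mapping — decoding a label list looks like:

```python
# Abbreviations of the ten hallmarks, in the order listed above
# (assumed to match the 0-based label ids).
HALLMARKS = ["PS", "GS", "CD", "RI", "A", "IM", "GI", "TPI", "CE", "ID"]

def decode(label_ids: list[int]) -> list[str]:
    """Map label ids to hallmark abbreviations."""
    return [HALLMARKS[i] for i in label_ids]

# The data instance shown earlier carries label [9, 5, 0, 6].
print(decode([9, 5, 0, 6]))  # ['ID', 'IM', 'PS', 'GI']
```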
### Data Splits
Distribution of data for the 10 hallmarks:
| **Hallmark** | **No. abstracts** | **No. sentences** |
|:------------:|:-----------------:|:-----------------:|
| 1. PS | 462 | 993 |
| 2. GS | 242 | 468 |
| 3. CD | 430 | 883 |
| 4. RI | 115 | 295 |
| 5. A | 143 | 357 |
| 6. IM | 291 | 667 |
| 7. GI | 333 | 771 |
| 8. TPI | 194 | 437 |
| 9. CE | 105 | 213 |
| 10. ID | 108 | 226 |
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__HoC__: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
GNU General Public License v3.0
```
```plain
Permissions
- Commercial use
- Modification
- Distribution
- Patent use
- Private use
Limitations
- Liability
- Warranty
Conditions
- License and copyright notice
- State changes
- Disclose source
- Same license
```
### Citation Information
We would very much appreciate it if you cite our publications:
[Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783)
```bibtex
@article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
```
[Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer](https://www.repository.cam.ac.uk/bitstream/handle/1810/265268/btx454.pdf?sequence=8&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
```
[Cancer hallmark text classification using convolutional neural networks](https://www.repository.cam.ac.uk/bitstream/handle/1810/270037/BIOTXTM2016.pdf?sequence=1&isAllowed=y)
```bibtex
@article{baker2016cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
```
[Initializing neural networks for hierarchical multi-label text classification](http://www.aclweb.org/anthology/W17-2339)
```bibtex
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
}
```
| [
-0.24214932322502136,
-0.3811121881008148,
0.3219483196735382,
0.08555341511964798,
-0.21181926131248474,
0.10597026348114014,
-0.1967848539352417,
-0.29109394550323486,
0.5592149496078491,
0.623931884765625,
-0.5391321778297424,
-1.1251118183135986,
-0.718037486076355,
0.42829227447509766... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sh0416/sst2-openai | sh0416 | 2023-02-25T12:53:53Z | 104 | 0 | null | [
"task_categories:text-classification",
"region:us"
] | 2023-02-25T12:53:53Z | 2023-02-25T12:16:45.000Z | 2023-02-25T12:16:45 | ---
task_categories:
- text-classification
---
Original source: https://github.com/openai/generating-reviews-discovering-sentiment
This dataset is different from the dataset distributed by GLUE, which means the metric **shouldn't be compared with the SST2 performance in GLUE**.
The description of the SST2 dataset in the paper is the following.
> The Stanford Sentiment Treebank (SST) (Socher et al., 2013) was created specifically to evaluate more complex compositional models of language. It is derived from the same base dataset as MR but was relabeled via Amazon Mechanical Turk and includes dense labeling of the phrases of parse trees computed for all sentences. For the binary subtask, this amounts to 76961 total labels compared to the 6920 sentence level labels. As a demonstration of the capability of unsupervised representation learning to simplify data collection and remove preprocessing steps, our reported results ignore these dense labels and computed parse trees, using only the raw text and sentence level labels.
| [
-0.09339125454425812,
-0.4669608771800995,
0.3649962246417999,
0.1741035133600235,
-0.6451656222343445,
0.09110502898693085,
-0.24017654359340668,
-0.21751748025417328,
0.5061249136924744,
0.25383079051971436,
-0.6176317930221558,
-0.6028377413749695,
-0.42935284972190857,
-0.0971868857741... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maveriq/tobacco3482 | maveriq | 2023-03-02T21:23:58Z | 104 | 1 | null | [
"region:us"
] | 2023-03-02T21:23:58Z | 2023-03-02T20:46:53.000Z | 2023-03-02T20:46:53 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ADVE
'1': Email
'2': Form
'3': Letter
'4': Memo
'5': News
'6': Note
'7': Report
'8': Resume
'9': Scientific
splits:
- name: train
num_bytes: 1409969631.808
num_examples: 3482
download_size: 1733093218
dataset_size: 1409969631.808
---
# Dataset Card for "tobacco3482"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.35396626591682434,
-0.031963687390089035,
0.22188414633274078,
0.14517812430858612,
-0.16154326498508453,
-0.0033219652250409126,
0.30078068375587463,
-0.06604253500699997,
0.8617354035377502,
0.6373369693756104,
-0.6287177801132202,
-0.8850177526473999,
-0.6302870512008667,
-0.31608149... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tarudesu/ViCTSD | tarudesu | 2023-11-28T07:20:27Z | 104 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:vi",
"arxiv:2103.10069",
"region:us"
] | 2023-11-28T07:20:27Z | 2023-03-12T14:16:24.000Z | 2023-03-12T14:16:24 | ---
task_categories:
- text-classification
language:
- vi
size_categories:
- 10K<n<100K
pretty_name: Vietnamese Constructive and Toxic Speech Detection Dataset
---
# Constructive and Toxic Speech Detection for Open-domain Social Media Comments in Vietnamese
This is the official repository for the UIT-ViCTSD dataset from the paper [Constructive and Toxic Speech Detection for Open-domain Social Media Comments in Vietnamese](https://arxiv.org/pdf/2103.10069.pdf), which was accepted at the [IEA/AIE 2021](https://ieaaie2021.wordpress.com/list-of-accepted-papers/).
# Citation Information
The provided dataset is only used for research purposes!
```
@InProceedings{nguyen2021victsd,
author="Nguyen, Luan Thanh and Van Nguyen, Kiet and Nguyen, Ngan Luu-Thuy",
title="Constructive and Toxic Speech Detection for Open-Domain Social Media Comments in Vietnamese",
booktitle="Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="572--583"
}
```
## Abstract
The rise of social media has led to an increase in comments on online forums. However, there still exist invalid comments which are not informative for users. Moreover, those comments are also quite toxic and harmful to people. In this paper, we create a dataset for constructive and toxic speech detection, named UIT-ViCTSD (Vietnamese Constructive and Toxic Speech Detection dataset), with 10,000 human-annotated comments. For these tasks, we propose a system for constructive and toxic speech detection using PhoBERT, the state-of-the-art transfer learning model in Vietnamese NLP. With this system, we obtain F1-scores of 78.59% and 59.40% for classifying constructive and toxic comments, respectively. Besides, we implement various baseline models, such as traditional Machine Learning and Deep Neural Network-based models, to evaluate the dataset. With these results, we can solve several tasks on online discussions and develop a framework for automatically identifying the constructiveness and toxicity of Vietnamese social media comments.
## Dataset
The ViCTSD dataset consists of 10,000 human-annotated comments across 10 domains, collected from Vietnamese users' comments on social media.
The dataset is divided into three parts as below:
1. Train set: 7,000 comments
2. Valid set: 2,000 comments
3. Test set: 1,000 comments
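As a quick sanity check, the three splits listed above add up to the dataset's 10,000 annotated comments; a trivial sketch:

```python
# Split sizes as stated in the card above.
splits = {"train": 7_000, "valid": 2_000, "test": 1_000}

total = sum(splits.values())
assert total == 10_000  # matches the 10,000 human-annotated comments
```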
## Contact
Please feel free to contact us by email at luannt@uit.edu.vn if you need any further information!
-0.18550509214401245,
-1.0067343711853027,
0.44723737239837646,
0.3658893406391144,
-0.5045391917228699,
-0.04756815731525421,
-0.3415054380893707,
-0.35172513127326965,
-0.11558376997709274,
0.5586683750152588,
-0.2711866796016693,
-0.7580047845840454,
-0.47875484824180603,
0.005682725459... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kuleshov/alpaca-data | kuleshov | 2023-04-13T23:05:16Z | 104 | 0 | null | [
"region:us"
] | 2023-04-13T23:05:16Z | 2023-04-13T23:04:15.000Z | 2023-04-13T23:04:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Thaweewat/alpaca-cleaned-52k-th | Thaweewat | 2023-05-09T16:18:02Z | 104 | 3 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | 2023-05-09T16:18:02Z | 2023-05-09T15:45:46.000Z | 2023-05-09T15:45:46 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
tags:
- instruction-finetuning
language:
- th
size_categories:
- 10K<n<100K
---
# Summary
This is a Thai 🇹🇭 instruction dataset, translated with Google Cloud Translation from the cleaned version of the original Alpaca dataset released by Stanford. It contains 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.
The following issues have been identified in the original release and fixed in this dataset:
1. **Hallucinations:** Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer.
2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason.
3. **Empty outputs:** Some entries in the original dataset had empty outputs.
4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.
5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible.
6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.
7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.
8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
9. **Non-Sensical/Unclear instructions:** Many instructions are unclear, we try to clarify (or re-write) if instructions are non-sensical. Instructions that are slightly unclear, but where one could deduce the meaning are not altered.
10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.
### Original Alpaca Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- | [
-0.3573794960975647,
-0.842934250831604,
0.3581751585006714,
-0.03466460108757019,
-0.2182825654745102,
-0.43008267879486084,
0.05553640425205231,
-0.2648621201515198,
0.007008162792772055,
0.7793431878089905,
-0.8802323937416077,
-0.45867517590522766,
-0.5132665634155273,
0.10931131988763... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TigerResearch/tigerbot-stackexchange-qa-en-0.5m | TigerResearch | 2023-05-31T02:21:45Z | 104 | 0 | null | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-05-31T02:21:45Z | 2023-05-30T15:06:49.000Z | 2023-05-30T15:06:49 | ---
license: apache-2.0
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot) SFT dataset generated from dump data of the Stack Exchange Q&A sites.
Original source: [https://archive.org/details/stackexchange](https://archive.org/details/stackexchange)
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-stackexchange-qa-en-0.5m')
``` | [
-0.33918696641921997,
-0.3302007019519806,
-0.060784712433815,
0.4125996530056,
-0.6155014634132385,
-0.14266972243785858,
0.18391814827919006,
0.1519247442483902,
0.7634761929512024,
0.6766886711120605,
-0.510003387928009,
-0.46992728114128113,
-0.1471659243106842,
0.28412675857543945,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/trec-covid-pl | clarin-knext | 2023-06-07T08:12:18Z | 104 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:12:18Z | 2023-06-06T22:34:50.000Z | 2023-06-06T22:34:50 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920734167099,
-0.9029767513275146,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.49629199504852295,
-0.0189602542668581,
0.41122621297836304,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175425052643,
-0.048304721713066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/scidocs-pl | clarin-knext | 2023-06-07T08:10:24Z | 104 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:10:24Z | 2023-06-06T22:48:25.000Z | 2023-06-06T22:48:25 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920734167099,
-0.9029767513275146,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.49629199504852295,
-0.0189602542668581,
0.41122621297836304,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175425052643,
-0.048304721713066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nazimali/quran-question-answer-context | nazimali | 2023-07-08T21:35:05Z | 104 | 3 | null | [
"task_categories:question-answering",
"language:ar",
"language:en",
"license:cc-by-4.0",
"islam",
"quran",
"arabic",
"region:us"
] | 2023-07-08T21:35:05Z | 2023-07-03T12:07:47.000Z | 2023-07-03T12:07:47 | ---
dataset_info:
features:
- name: q_id
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: q_word
dtype: string
- name: q_topic
dtype: string
- name: fine_class
dtype: string
- name: class
dtype: string
- name: ontology_concept
dtype: string
- name: ontology_concept2
dtype: string
- name: source
dtype: string
- name: q_src_id
dtype: int64
- name: quetion_type
dtype: string
- name: chapter_name
dtype: string
- name: chapter_no
dtype: int64
- name: verse
sequence: string
- name: question_en
dtype: string
- name: answer_en
dtype: string
- name: q_word_en
dtype: string
- name: q_topic_en
dtype: string
- name: fine_class_en
dtype: string
- name: class_en
dtype: string
- name: ontology_concept_en
dtype: string
- name: chapter_name_en
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 2226830.0310711367
num_examples: 978
- name: test
num_bytes: 557845.9689288634
num_examples: 245
download_size: 1515128
dataset_size: 2784676.0
license: cc-by-4.0
task_categories:
- question-answering
pretty_name: Quran Question Answer with Context
language:
- ar
- en
tags:
- islam
- quran
- arabic
---
# Dataset Card for "quran-question-answer-context"
## Dataset Summary
Translated the original dataset from Arabic to English and added the Surah ayahs to the `context` column.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("nazimali/quran-question-answer-context")
```
```python
DatasetDict({
train: Dataset({
features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source', 'q_src_id', 'quetion_type', 'chapter_name', 'chapter_no', 'verse', 'question_en', 'answer_en', 'q_word_en', 'q_topic_en', 'fine_class_en', 'class_en', 'ontology_concept_en', 'chapter_name_en', 'context'],
num_rows: 978
})
test: Dataset({
features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source', 'q_src_id', 'quetion_type', 'chapter_name', 'chapter_no', 'verse', 'question_en', 'answer_en', 'q_word_en', 'q_topic_en', 'fine_class_en', 'class_en', 'ontology_concept_en', 'chapter_name_en', 'context'],
num_rows: 245
})
})
```
## Translation Info
1. Translated the Arabic questions/concept columns to English with [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en)
2. Used `en-yusufali` translations for ayahs [M-AI-C/quran-en-tafssirs](https://huggingface.co/datasets/M-AI-C/quran-en-tafssirs)
3. Renamed Surahs with [kheder/quran](https://huggingface.co/datasets/kheder/quran)
4. Added the ayahs that helped answer the questions
   - Split the `ayah` column's string into a list of integers
   - Concatenated the Surah:Ayah pairs into a sentence for the `context` column
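The two transformation steps above can be sketched roughly as follows (a guess at the preprocessing, with hypothetical helper and parameter names — the exact formatting of the `context` column may differ):

```python
def build_context(chapter_no: int, verse_field: str, translations: dict) -> str:
    """Join the English translations of the cited ayahs into one context string.

    `verse_field` is the raw ayah column, e.g. "255, 256"; `translations`
    maps (chapter, ayah) pairs to their English text (e.g. en-yusufali).
    """
    # Step 1: split the ayah string into a list of integers.
    ayah_numbers = [int(v) for v in verse_field.split(",")]
    # Step 2: concatenate the Surah:Ayah pairs with their text into a sentence.
    parts = [f"{chapter_no}:{a} {translations[(chapter_no, a)]}" for a in ayah_numbers]
    return " ".join(parts)
```

For example, `build_context(2, "255, 256", trans)` would produce a string of the form `"2:255 … 2:256 …"` given a suitable `trans` mapping.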
Columns with the suffix `_en` contain the translations of the original columns.
## TODO
The `context` column has some `null` values that need to be investigated and fixed
## Initial Data Collection
The original dataset is from **[Annotated Corpus of Arabic Al-Quran Question and Answer](https://archive.researchdata.leeds.ac.uk/464/)**
## Licensing Information
Original dataset [license](https://archive.researchdata.leeds.ac.uk/464/): **Creative Commons Attribution 4.0 International (CC BY 4.0)**
### Contributions
Original paper authors: Alqahtani, Mohammad and Atwell, Eric (2018) Annotated Corpus of Arabic Al-Quran Question and Answer. University of Leeds. https://doi.org/10.5518/356 | [
-0.5905731916427612,
-0.7149754762649536,
-0.017805777490139008,
-0.05799541249871254,
-0.33970126509666443,
-0.38497641682624817,
-0.08686632663011551,
-0.13782384991645813,
0.3028009235858917,
0.6602604985237122,
-0.643056333065033,
-0.8507962226867676,
-0.4686923027038574,
0.59521496295... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aboonaji/alpaca_micro_demo | aboonaji | 2023-08-08T13:57:18Z | 104 | 0 | null | [
"region:us"
] | 2023-08-08T13:57:18Z | 2023-08-08T13:00:10.000Z | 2023-08-08T13:00:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3 | open-llm-leaderboard | 2023-10-21T23:45:34Z | 104 | 0 | null | [
"region:us"
] | 2023-10-21T23:45:34Z | 2023-08-18T11:07:05.000Z | 2023-08-18T11:07:05 | ---
pretty_name: Evaluation run of lmsys/vicuna-7b-v1.3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-21T23:45:21.646720](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3/blob/main/results_2023-10-21T23-45-21.646720.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.12730704697986578,\n\
\ \"em_stderr\": 0.003413474068983651,\n \"f1\": 0.17891254194630765,\n\
\ \"f1_stderr\": 0.0035073277688968674,\n \"acc\": 0.38083789051163464,\n\
\ \"acc_stderr\": 0.0095991004919272\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.12730704697986578,\n \"em_stderr\": 0.003413474068983651,\n\
\ \"f1\": 0.17891254194630765,\n \"f1_stderr\": 0.0035073277688968674\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05686125852918878,\n \
\ \"acc_stderr\": 0.0063787902420996325\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7048145224940805,\n \"acc_stderr\": 0.012819410741754765\n\
\ }\n}\n```"
repo_url: https://huggingface.co/lmsys/vicuna-7b-v1.3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_21T23_45_21.646720
path:
- '**/details_harness|drop|3_2023-10-21T23-45-21.646720.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-21T23-45-21.646720.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_21T23_45_21.646720
path:
- '**/details_harness|gsm8k|5_2023-10-21T23-45-21.646720.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-21T23-45-21.646720.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:22:02.219224.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:22:02.219224.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_21T23_45_21.646720
path:
- '**/details_harness|winogrande|5_2023-10-21T23-45-21.646720.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-21T23-45-21.646720.parquet'
- config_name: results
data_files:
- split: 2023_07_19T16_22_02.219224
path:
- results_2023-07-19T16:22:02.219224.parquet
- split: 2023_10_21T23_45_21.646720
path:
- results_2023-10-21T23-45-21.646720.parquet
- split: latest
path:
- results_2023-10-21T23-45-21.646720.parquet
---
# Dataset Card for Evaluation run of lmsys/vicuna-7b-v1.3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lmsys/vicuna-7b-v1.3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3",
"harness_winogrande_5",
split="train")
```
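As noted above, each run's split is named using the timestamp of the run. The transformation below is inferred from the split names listed in this card's own configuration (it is an illustrative helper, not an official API): dashes and colons in the timestamp become underscores, while the fractional-second part is kept as-is.

```python
def timestamp_to_split_name(timestamp: str) -> str:
    """Convert a run timestamp (e.g. '2023-10-21T23:45:21.646720')
    into the split name used in this dataset's configurations
    (e.g. '2023_10_21T23_45_21.646720')."""
    # Keep the fractional seconds untouched; only the date/time part is rewritten.
    head, _, fraction = timestamp.partition(".")
    head = head.replace("-", "_").replace(":", "_")
    return f"{head}.{fraction}" if fraction else head

# The run listed under "Latest results" below:
print(timestamp_to_split_name("2023-10-21T23:45:21.646720"))
# -> 2023_10_21T23_45_21.646720
```

This can be useful when you want to load a specific timestamped split rather than "latest".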
## Latest results
These are the [latest results from run 2023-10-21T23:45:21.646720](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-7b-v1.3/blob/main/results_2023-10-21T23-45-21.646720.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.12730704697986578,
"em_stderr": 0.003413474068983651,
"f1": 0.17891254194630765,
"f1_stderr": 0.0035073277688968674,
"acc": 0.38083789051163464,
"acc_stderr": 0.0095991004919272
},
"harness|drop|3": {
"em": 0.12730704697986578,
"em_stderr": 0.003413474068983651,
"f1": 0.17891254194630765,
"f1_stderr": 0.0035073277688968674
},
"harness|gsm8k|5": {
"acc": 0.05686125852918878,
"acc_stderr": 0.0063787902420996325
},
"harness|winogrande|5": {
"acc": 0.7048145224940805,
"acc_stderr": 0.012819410741754765
}
}
```
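The results dict above follows a `harness|<task>|<n_shot>` key convention, with an aggregated `all` entry. A minimal sketch of flattening it into per-task rows (the dict is abridged from the block above, with the stderr fields omitted; `per_task_rows` is a hypothetical helper, not part of any library):

```python
# Abridged copy of the latest results shown above (stderr fields omitted).
latest_results = {
    "all": {"em": 0.12730704697986578, "f1": 0.17891254194630765,
            "acc": 0.38083789051163464},
    "harness|drop|3": {"em": 0.12730704697986578, "f1": 0.17891254194630765},
    "harness|gsm8k|5": {"acc": 0.05686125852918878},
    "harness|winogrande|5": {"acc": 0.7048145224940805},
}

def per_task_rows(results: dict) -> list:
    """Flatten 'harness|<task>|<n_shot>' entries into
    (task, n_shot, metric, value) rows, skipping the aggregated 'all' entry."""
    rows = []
    for key, metrics in results.items():
        if key == "all":
            continue
        _, task, n_shot = key.split("|")
        for metric, value in metrics.items():
            rows.append((task, int(n_shot), metric, value))
    return rows

for row in per_task_rows(latest_results):
    print(row)
```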
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
---
pretty_name: Evaluation run of TheTravellingEngineer/bloom-1b1-RLHF
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheTravellingEngineer/bloom-1b1-RLHF](https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one\
\ of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T19:51:27.983287](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF/blob/main/results_2023-10-16T19-51-27.983287.json) (note\
\ that there might be results for other tasks in the repository if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\
em_stderr\": 0.0,\n \"f1\": 0.0,\n \"f1_stderr\": 0.0,\n \"\
acc\": 0.25453827940015783,\n \"acc_stderr\": 0.007025085047248855\n },\n\
\ \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n\
\ \"f1\": 0.0,\n \"f1_stderr\": 0.0\n },\n \"harness|gsm8k|5\"\
: {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5090765588003157,\n \"acc_stderr\": 0.01405017009449771\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|arc:challenge|25_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T19_51_27.983287
path:
- '**/details_harness|drop|3_2023-10-16T19-51-27.983287.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T19-51-27.983287.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T19_51_27.983287
path:
- '**/details_harness|gsm8k|5_2023-10-16T19-51-27.983287.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T19-51-27.983287.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hellaswag|10_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T19_51_27.983287
path:
- '**/details_harness|winogrande|5_2023-10-16T19-51-27.983287.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T19-51-27.983287.parquet'
- config_name: results
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- results_2023-08-09T08:38:39.084452.parquet
- split: 2023_10_16T19_51_27.983287
path:
- results_2023-10-16T19-51-27.983287.parquet
- split: latest
path:
- results_2023-10-16T19-51-27.983287.parquet
---
# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-1b1-RLHF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheTravellingEngineer/bloom-1b1-RLHF](https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, with the split named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF",
	"harness_winogrande_5",
	split="latest")
```
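The config names listed above follow a regular pattern: `harness_<task>_<num_fewshot>`, with non-alphanumeric characters in the harness task name (such as the colon in `truthfulqa:mc` or the hyphens in `hendrycksTest-*`) replaced by underscores. A small helper sketching that mapping (derived from the config list in this card, not an official API):

```python
def details_config_name(task: str, num_fewshot: int) -> str:
    """Map a harness task name and few-shot count to this repo's config name.

    Non-alphanumeric characters in the task name (e.g. the ':' in
    'truthfulqa:mc' or the '-' in 'hendrycksTest-anatomy') become '_'.
    """
    safe_task = "".join(c if c.isalnum() else "_" for c in task)
    return f"harness_{safe_task}_{num_fewshot}"

# Examples matching the configs listed above:
print(details_config_name("winogrande", 5))             # harness_winogrande_5
print(details_config_name("truthfulqa:mc", 0))          # harness_truthfulqa_mc_0
print(details_config_name("hendrycksTest-anatomy", 5))  # harness_hendrycksTest_anatomy_5
```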
## Latest results
These are the [latest results from run 2023-10-16T19:51:27.983287](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF/blob/main/results_2023-10-16T19-51-27.983287.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```json
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0,
"f1_stderr": 0.0,
"acc": 0.25453827940015783,
"acc_stderr": 0.007025085047248855
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0,
"f1_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5090765588003157,
"acc_stderr": 0.01405017009449771
}
}
```
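As a quick sanity check, the aggregate `"all"` accuracy above appears to be the plain unweighted mean of the per-task `acc` values (GSM8K and Winogrande here; DROP reports `em`/`f1` rather than `acc`). A minimal verification, assuming unweighted averaging:

```python
# Per-task accuracies taken from the results JSON above
task_acc = {
    "harness|gsm8k|5": 0.0,
    "harness|winogrande|5": 0.5090765588003157,
}

# Unweighted mean over the tasks that report an accuracy
all_acc = sum(task_acc.values()) / len(task_acc)
print(all_acc)  # ~0.2545382794, matching the "all" value up to float rounding
```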
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
---
license: mit
configs:
- config_name: test
data_files:
- split: zh
path: "test_zh.json"
- split: en
path: "test_en.json"
- split: zh_subset
path: "test_zh_subset.json"
- config_name: dev
data_files:
- split: zh
path: "dev_zh.json"
- split: en
path: "dev_en.json"
---
SafetyBench is a comprehensive benchmark for evaluating the safety of LLMs, which comprises 11,435 diverse multiple choice questions spanning across 7 distinct categories of safety concerns. Notably, SafetyBench also incorporates both Chinese and English data, facilitating the evaluation in both languages.
Please visit our [GitHub](https://github.com/thu-coai/SafetyBench) and [website](https://llmbench.ai/safety) or check our [paper](https://arxiv.org/abs/2309.07045) for more details.
We release three different test sets: a Chinese test set (`test_zh.json`), an English test set (`test_en.json`) and a Chinese subset test set (`test_zh_subset.json`). The Chinese subset test set removes samples containing highly sensitive keywords, so that API-based Chinese LLMs with strict filtering mechanisms can also be evaluated.
We also release 5 examples for each of the 7 safety categories in both Chinese (`dev_zh.json`) and English (`dev_en.json`), to enable few-shot evaluation.
Labels on the test sets are not released and users are required to submit their results to automatically obtain test accuracy. [How to submit?](https://github.com/thu-coai/SafetyBench#how-to-submit)
## Load the Data
```python
from datasets import load_dataset
test_dataset = load_dataset("thu-coai/SafetyBench", "test")
dev_dataset = load_dataset("thu-coai/SafetyBench", "dev")
```
More details about loading the data and evaluating LLMs can be found in our [GitHub repository](https://github.com/thu-coai/SafetyBench).
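As a rough sketch of how the few-shot dev examples can be turned into a prompt (the field names `question`, `options`, and `answer` below are assumptions about the schema rather than details from this card, and `answer` is assumed to be a 0-based option index; check the loaded data first):

```python
def build_mc_prompt(test_example, dev_examples=()):
    """Assemble a multiple-choice prompt from few-shot dev examples.

    Field names (`question`, `options`, `answer`) are assumed, not documented
    on this card; `answer` is assumed to be the 0-based index of the gold option.
    """
    letters = "ABCDEFGH"
    lines = []
    for shot in dev_examples:
        lines.append(f"Question: {shot['question']}")
        for letter, option in zip(letters, shot["options"]):
            lines.append(f"({letter}) {option}")
        lines.append(f"Answer: ({letters[shot['answer']]})")
        lines.append("")
    lines.append(f"Question: {test_example['question']}")
    for letter, option in zip(letters, test_example["options"]):
        lines.append(f"({letter}) {option}")
    lines.append("Answer: (")
    return "\n".join(lines)
```

The prompt is left open at `Answer: (` so that a model completes it with an option letter.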
Please cite our paper if you use our dataset.
```
@article{zhang2023safetybench,
title={SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions},
author={Zhexin Zhang and Leqi Lei and Lindong Wu and Rui Sun and Yongkang Huang and Chong Long and Xiao Liu and Xuanyu Lei and Jie Tang and Minlie Huang},
journal={arXiv preprint arXiv:2309.07045},
year={2023}
}
```
open-phi/programming_books_llama | open-phi | 2023-10-04T18:02:56Z | 104 | 8 | null | [
"region:us"
] | 2023-10-04T18:02:56Z | 2023-10-03T18:27:59.000Z | 2023-10-03T18:27:59 | ---
dataset_info:
features:
- name: topic
dtype: string
- name: outline
sequence: string
- name: concepts
sequence: string
- name: queries
sequence: string
- name: context
sequence: string
- name: markdown
dtype: string
- name: model
dtype: string
splits:
- name: train
num_bytes: 1677240291
num_examples: 111048
download_size: 631279270
dataset_size: 1677240291
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "programming_books_llama"
400M tokens of programming books generated by gpt-3.5 (70M tokens) and a finetuned codellama 34b. The gpt-3.5 data is extremely high quality. The llama data has lower quality and shorter length, but is still good. This was generated with the [textbook quality](https://github.com/VikParuchuri/textbook_quality) repo.
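Since quality differs by generator, it can be useful to keep only one portion by filtering on the `model` column (a minimal sketch; the exact strings stored in `model` are an assumption, so inspect the column's actual values before relying on this):

```python
def keep_gpt35(row):
    # Assumes gpt-3.5 rows carry a model name containing "gpt-3.5";
    # verify against the actual `model` values in the dataset first.
    return "gpt-3.5" in row["model"].lower()

# With Hugging Face datasets this would be: ds = ds.filter(keep_gpt35)
sample = [{"model": "gpt-3.5-turbo"}, {"model": "codellama-34b"}]
gpt_rows = [r for r in sample if keep_gpt35(r)]
```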
distil-whisper/rev16 | distil-whisper | 2023-10-17T17:15:02Z | 104 | 0 | null | [
"region:us"
] | 2023-10-17T17:15:02Z | 2023-10-13T16:09:08.000Z | 2023-10-13T16:09:08 | ---
dataset_info:
- config_name: full
features:
- name: audio
dtype: audio
- name: file_number
dtype: string
- name: show_title
dtype: string
- name: episode_title
dtype: string
- name: itunes_id
dtype: string
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 1509910660.0
num_examples: 30
download_size: 1445493754
dataset_size: 1509910660.0
- config_name: whisper_subset
features:
- name: audio
dtype: audio
- name: file_number
dtype: string
- name: show_title
dtype: string
- name: episode_title
dtype: string
- name: itunes_id
dtype: string
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 921693242.0
num_examples: 16
download_size: 881542397
dataset_size: 921693242.0
configs:
- config_name: full
data_files:
- split: test
path: full/test-*
- config_name: whisper_subset
data_files:
- split: test
path: whisper_subset/test-*
---
# Dataset Card for "rev16"
Configs:
* `full`: the entire 30 podcast files
* `whisper_subset`: the subset of 16 podcast files used in the Whisper paper for long-form evaluation. The remaining 14 files have mismatches between the audio and labels, and are thus filtered from the test set.
amlan107/dv | amlan107 | 2023-10-20T13:52:28Z | 104 | 0 | null | [
"region:us"
] | 2023-10-20T13:52:28Z | 2023-10-20T13:52:26.000Z | 2023-10-20T13:52:26 | ---
dataset_info:
features:
- name: bn
dtype: string
- name: ck
dtype: string
splits:
- name: dev_val
num_bytes: 448783.4764662805
num_examples: 3005
download_size: 207781
dataset_size: 448783.4764662805
---
# Dataset Card for "dv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
detectors/lsun_c-ood | detectors | 2023-10-30T18:25:37Z | 104 | 0 | lsun | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:unknown",
"arxiv:1610.01119",
"arxiv:1706.02690",
"region:us"
] | 2023-10-30T18:25:37Z | 2023-10-30T17:35:57.000Z | 2023-10-30T17:35:57 | ---
license: unknown
size_categories: 10K<n<100K
task_categories:
- image-classification
paperswithcode_id: lsun
pretty_name: LSUN (c)
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 17509356.0
num_examples: 10000
download_size: 0
dataset_size: 17509356.0
---
# Dataset Card for LSUN (c) for OOD Detection
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Original Dataset Authors**: Limin Wang, Sheng Guo, Weilin Huang, Yuanjun Xiong, Yu Qiao
- **OOD Split Authors:** Shiyu Liang, Yixuan Li, R. Srikant
- **Shared by:** Eduardo Dadalto
- **License:** unknown
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Original Dataset Paper:** http://arxiv.org/abs/1610.01119v2
- **First OOD Application Paper:** http://arxiv.org/abs/1706.02690v5
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset is intended to be used as an out-of-distribution dataset for image classification benchmarks.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset is not annotated.
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The goal of curating this dataset and sharing it on the Hugging Face Hub is to accelerate research and promote reproducibility in generalized Out-of-Distribution (OOD) detection.
Check the python library [detectors](https://github.com/edadaltocg/detectors) if you are interested in OOD detection.
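For context, the simplest detector evaluated on OOD splits like this one scores each input by its maximum softmax probability (a generic sketch, independent of the `detectors` API; the ODIN method from the cited paper additionally applies temperature scaling and input perturbation):

```python
import math

def msp_score(logits):
    """Maximum softmax probability of one logit vector; higher = more in-distribution."""
    m = max(logits)                               # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

# A confident prediction scores near 1; a near-uniform one scores near 1/num_classes.
in_dist_score = msp_score([10.0, 0.0, 0.0])
ood_score = msp_score([0.1, 0.0, 0.2])
```

Thresholding this score separates in-distribution test images from OOD images such as the ones in this dataset.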
### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
Please check original paper for details on the dataset.
### Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Please check original paper for details on the dataset.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@software{detectors2023,
author = {Eduardo Dadalto},
title = {Detectors: a Python Library for Generalized Out-Of-Distribution Detection},
url = {https://github.com/edadaltocg/detectors},
doi = {https://doi.org/10.5281/zenodo.7883596},
month = {5},
year = {2023}
}
@article{1706.02690v5,
author = {Shiyu Liang and Yixuan Li and R. Srikant},
title = {Enhancing The Reliability of Out-of-distribution Image Detection in
Neural Networks},
year = {2017},
month = {6},
note = {ICLR 2018},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1706.02690v5}
}
@article{1610.01119v2,
author = {Limin Wang and Sheng Guo and Weilin Huang and Yuanjun Xiong and Yu Qiao},
title = {Knowledge Guided Disambiguation for Large-Scale Scene Classification
with Multi-Resolution CNNs},
year = {2016},
month = {10},
note = {To appear in IEEE Transactions on Image Processing. Code and models
are available at https://github.com/wanglimin/MRCNN-Scene-Recognition},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1610.01119v2}
}
```
## Dataset Card Authors
Eduardo Dadalto
## Dataset Card Contact
https://huggingface.co/edadaltocg
jtjt520j/CSpider_sql_create_context | jtjt520j | 2023-11-26T05:56:31Z | 104 | 0 | null | [
"region:us"
] | 2023-11-26T05:56:31Z | 2023-11-26T05:12:51.000Z | 2023-11-26T05:12:51 | The original source of this dataset is CSpider. It was created following [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context).
lewtun/asr_dummy | lewtun | 2021-07-13T13:12:38Z | 103 | 0 | null | [
"region:us"
] | 2021-07-13T13:12:38Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
usc-isi/WikiConvert | usc-isi | 2022-10-24T17:40:43Z | 103 | 5 | null | [
"task_categories:fill-mask",
"task_categories:other",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
... | 2022-10-24T17:40:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- fill-mask
- other
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: Wiki-Convert
YAML tags:
- {}
- found
language_bcp47:
- en-US
tags:
- numeracy
- natural-language-understanding
- tokenization
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/avi-jit/numeracy-literacy)
- **Paper:** [Anthology](https://aclanthology.org/2021.emnlp-main.557)
- **Point of Contact:** [Avijit Thawani](mailto:thawani@isi.edu)
### Dataset Summary
Wiki-Convert is a dataset of 900,000+ sentences with precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a [{{Convert}}](https://en.wikipedia.org/wiki/Template:Convert) template.
### Supported Tasks and Leaderboards
- `sequence-modeling`: The dataset can be used to train a model for language modeling, which consists in predicting the next token given the preceding context. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/transformers/perplexity.html).
### Languages
The dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.
## Dataset Structure
### Data Instances
Each row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., `number: 10` in the below example. The annotations are inspired by Numeracy-600K and are in the form of `length` and `offset` from the beginning of the sentence.
```
{
'id': 1080801, 'UNIQUE_STORY_INDEX': '1080801', 'offset': 83, 'length': 2, 'magnitude': 0, 'comment': "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a 10 cms deck gun. ''", 'number': 10
}
```
Please refer to https://github.com/avi-jit/numeracy-literacy for more details.
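As a minimal sketch of consuming the annotations (assuming `offset` is a 0-based character index into `comment` and `length` is a character count; the card does not state the indexing convention, so verify on a few rows first), the annotated number's surface form can be recovered like this, where `row` is a shortened, made-up instance:

```python
def number_span(example):
    """Slice the annotated number's surface form out of the sentence text."""
    start = example["offset"]
    return example["comment"][start:start + example["length"]]

# Hypothetical, simplified instance for illustration (not from the dataset).
row = {"comment": "UB-117 carried 10 torpedoes.", "offset": 15, "length": 2, "number": 10}
```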
### Data Splits
| | Train | Dev | Test |
| ----- | :------: | :-----: | :----: |
| Input Sentences | 739,583 | 92,447 | 92,449|
## License
Provided under MIT License.
## Citation
```
@inproceedings{thawani-etal-2021-numeracy,
title = "Numeracy enhances the Literacy of Language Models",
author = "Thawani, Avijit and
Pujara, Jay and
Ilievski, Filip",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.557",
pages = "6960--6967",
abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.",
}
```
Thanks to [@avi-jit](https://github.com/avi-jit) for adding this dataset.
snoop2head/commoncrawl_sampled_gpt2-xl | snoop2head | 2022-08-04T12:28:33Z | 103 | 0 | null | [
"region:us"
] | 2022-08-04T12:28:33Z | 2022-08-03T04:46:04.000Z | 2022-08-03T04:46:04 | Entry not found
valhalla/pokemon-dataset | valhalla | 2022-10-02T15:58:40Z | 103 | 2 | null | [
"region:us"
] | 2022-10-02T15:58:40Z | 2022-10-02T14:42:44.000Z | 2022-10-02T14:42:44 | Entry not found
tomekkorbak/detoxify-pile-chunk3-1800000-1850000 | tomekkorbak | 2022-10-05T00:01:14Z | 103 | 0 | null | [
"region:us"
] | 2022-10-05T00:01:14Z | 2022-10-05T00:01:05.000Z | 2022-10-05T00:01:05 | Entry not found
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_350m_Visclues_ns_5647_random | Multimodal-Fatima | 2023-02-04T01:39:49Z | 103 | 0 | null | [
"region:us"
] | 2023-02-04T01:39:49Z | 2023-02-04T01:14:54.000Z | 2023-02-04T01:14:54 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 86816473.125
num_examples: 5647
- name: fewshot_3_bs_16
num_bytes: 90734475.125
num_examples: 5647
download_size: 169654260
dataset_size: 177550948.25
---
# Dataset Card for "Caltech101_not_background_test_facebook_opt_350m_Visclues_ns_5647_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kaludi/Customer-Support-Responses | Kaludi | 2023-03-27T23:11:45Z | 103 | 2 | null | [
"region:us"
] | 2023-03-27T23:11:45Z | 2023-03-27T23:11:14.000Z | 2023-03-27T23:11:14 | Entry not found
ChilleD/MultiArith | ChilleD | 2023-05-02T01:44:21Z | 103 | 3 | null | [
"region:us"
] | 2023-05-02T01:44:21Z | 2023-05-01T13:19:47.000Z | 2023-05-01T13:19:47 | Entry not found
rubend18/ChatGPT-Jailbreak-Prompts | rubend18 | 2023-08-24T18:24:29Z | 103 | 32 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:zero-shot-classification",
"task_categories:table-question-answering",
"size_categories:n<1K",
"language:en",
"language:aa",
"ChatGPT",
"JailbreakPrompts",
"LanguageModeling",
... | 2023-08-24T18:24:29Z | 2023-05-25T21:04:52.000Z | 2023-05-25T21:04:52 | ---
task_categories:
- question-answering
- text-generation
- fill-mask
- zero-shot-classification
- table-question-answering
language:
- en
- aa
tags:
- ChatGPT
- JailbreakPrompts
- LanguageModeling
- ArtificialIntelligence
- TextGeneration
- Dataset
- OpenAI
- Jailbreak
- Prompts
size_categories:
- n<1K
pretty_name: ChatGPT Jailbreak Prompts
---
# Dataset Card for Dataset Name
## Name
ChatGPT Jailbreak Prompts
## Dataset Description
- **Autor:** Rubén Darío Jaramillo
- **Email:** rubend18@hotmail.com
- **WhatsApp:** +593 93 979 6676
### Dataset Summary
ChatGPT Jailbreak Prompts is a complete collection of jailbreak-related prompts for ChatGPT. This dataset is intended to provide a valuable resource for understanding and generating text in the context of ChatGPT jailbreaking.
### Languages
[English]
singletongue/wikipedia-utils | singletongue | 2023-05-29T03:41:54Z | 103 | 1 | null | [
"size_categories:10M<n<100M",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | 2023-05-29T03:41:54Z | 2023-05-29T01:42:43.000Z | 2023-05-29T01:42:43 | ---
license:
- cc-by-sa-3.0
- gfdl
dataset_info:
- config_name: corpus-jawiki-20230403
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3569619848
num_examples: 24387500
download_size: 1297833377
dataset_size: 3569619848
- config_name: corpus-jawiki-20230403-cirrus
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4779055224
num_examples: 28018607
download_size: 1730081783
dataset_size: 4779055224
- config_name: corpus-jawiki-20230403-filtered-large
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3027074884
num_examples: 20133720
download_size: 1092808039
dataset_size: 3027074884
- config_name: paragraphs-jawiki-20230403
features:
- name: id
dtype: string
- name: pageid
dtype: int64
- name: revid
dtype: int64
- name: paragraph_index
dtype: int64
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
- name: html_tag
dtype: string
splits:
- name: train
num_bytes: 4417130987
num_examples: 9668476
download_size: 1489512230
dataset_size: 4417130987
- config_name: passages-c300-jawiki-20230403
features:
- name: id
dtype: int64
- name: pageid
dtype: int64
- name: revid
dtype: int64
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3939431360
num_examples: 6639833
download_size: 1402596784
dataset_size: 3939431360
- config_name: passages-c400-jawiki-20230403
features:
- name: id
dtype: int64
- name: pageid
dtype: int64
- name: revid
dtype: int64
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3868482519
num_examples: 5555583
download_size: 1393661115
dataset_size: 3868482519
- config_name: passages-para-jawiki-20230403
features:
- name: id
dtype: int64
- name: pageid
dtype: int64
- name: revid
dtype: int64
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3751418134
num_examples: 9397066
download_size: 1296071247
dataset_size: 3751418134
language:
- ja
size_categories:
- 10M<n<100M
---
# Wikipedia-Utils: Preprocessed Wikipedia Texts for NLP
Preprocessed Wikipedia texts generated with the scripts in [singletongue/wikipedia-utils](https://github.com/singletongue/wikipedia-utils) repo.
For detailed information on how the texts are processed, please refer to the repo.
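For instance, the `passages-c300`/`passages-c400` configs pack article text into passages of roughly 300/400 characters. As a rough illustration of that kind of character-budget packing (only a sketch: the actual splitting rules, including how titles and sections are handled, live in the repo):

```python
def pack_sentences(sentences, max_chars=300):
    """Greedily pack sentences into passages of at most ~max_chars characters.

    Illustration only; see the wikipedia-utils repo for the real rules.
    """
    passages, buf = [], ""
    for sent in sentences:
        candidate = f"{buf} {sent}".strip()
        if buf and len(candidate) > max_chars:
            passages.append(buf)   # flush the current passage
            buf = sent
        else:
            buf = candidate
    if buf:
        passages.append(buf)
    return passages
```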
GSQA/speech-alpaca-gpt4-unit | GSQA | 2023-08-09T15:29:24Z | 103 | 1 | null | [
"region:us"
] | 2023-08-09T15:29:24Z | 2023-08-08T18:13:35.000Z | 2023-08-08T18:13:35 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: speech_input
dtype: string
- name: input_speaker
dtype: string
- name: output_speaker
dtype: string
- name: mhubert_layer11_code1000_input_code
dtype: string
- name: mhubert_layer11_code1000_output_audio
dtype: string
- name: hubert_layer6_code100_input_code
dtype: string
- name: hubert_layer6_code100_output_audio
dtype: string
splits:
- name: train
num_bytes: 1718767489
num_examples: 51349
download_size: 654738368
dataset_size: 1718767489
---
# Dataset Card for "speech-alpaca-gpt4-unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
argilla/emotion | argilla | 2023-08-23T06:37:14Z | 103 | 0 | null | [
"size_categories:10K<n<100K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2023-08-23T06:37:14Z | 2023-08-23T06:33:42.000Z | 2023-08-23T06:33:42 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for emotion
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/emotion")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/emotion")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | LabelQuestion | True | N/A | ['0', '1', '2', '3', '4', '5'] |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value/s of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"fields": {
"text": "i didnt feel humiliated"
},
"metadata": {
"split": "train"
},
"responses": [
{
"status": "submitted",
"values": {
"label": {
"value": "0"
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"label": [
{
"status": "submitted",
"user_id": null,
"value": "0"
}
],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{\"split\": \"train\"}",
"text": "i didnt feel humiliated"
}
```
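A record in the `datasets` layout above can be unpacked with plain Python. The sketch below mirrors the example record; the majority-vote helper is our own illustration for aggregating multiple annotator responses, not part of Argilla itself:

```python
import json
from collections import Counter

# Record in the HuggingFace `datasets` layout shown above.
record = {
    "external_id": None,
    "label": [{"status": "submitted", "user_id": None, "value": "0"}],
    "label-suggestion": None,
    "label-suggestion-metadata": {"agent": None, "score": None, "type": None},
    "metadata": "{\"split\": \"train\"}",
    "text": "i didnt feel humiliated",
}

def majority_label(rec):
    """Return the most common submitted value among annotator responses."""
    values = [r["value"] for r in rec["label"] if r["status"] == "submitted"]
    return Counter(values).most_common(1)[0][0] if values else None

# `metadata` is stored as a JSON string, so it must be decoded.
meta = json.loads(record["metadata"])

print(majority_label(record))  # -> 0
print(meta["split"])           # -> train
```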
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `LabelQuestion` with the following allowed values ['0', '1', '2', '3', '4', '5'].
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4', '5'].
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Argilla port of [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion).
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.8136851191520691,
-0.8654404878616333,
0.27801409363746643,
0.37218764424324036,
-0.29063963890075684,
-0.441890150308609,
-0.03354550525546074,
-0.5779585242271423,
0.740147054195404,
0.6744558811187744,
-0.8351464867591858,
-0.9178545475006104,
-0.6832461953163147,
0.3368529677391052,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nicolas-BZRD/Parallel_Global_Voices_English_French | Nicolas-BZRD | 2023-09-21T15:40:05Z | 103 | 0 | null | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:fr",
"license:cc-by-3.0",
"parallel",
"parallel data",
"region:us"
] | 2023-09-21T15:40:05Z | 2023-09-21T15:03:00.000Z | 2023-09-21T15:03:00 | ---
license: cc-by-3.0
dataset_info:
features:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 89720129
num_examples: 342060
download_size: 57746668
dataset_size: 89720129
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- translation
language:
- en
- fr
tags:
- parallel
- parallel data
size_categories:
- 100K<n<1M
---
# Parallel Global Voices (English-French)
Parallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (http://globalvoices.org/), where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is made available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.
### Attribution details
Parallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (http://lr-coordination.eu/) by researchers at the NLP group of the Institute for Language and Speech Processing (http://www.ilsp.gr/) with primary data copyrighted by Parallel Global Voices (https://globalvoices.org/) and is licensed under "CC-BY 3.0" (https://creativecommons.org/licenses/by/3.0/). | [
-0.639697790145874,
-0.18012647330760956,
0.5951387286186218,
0.6714238524436951,
-0.23394529521465302,
0.414888858795166,
-0.3672551214694977,
-0.7839034795761108,
0.30701717734336853,
0.514980673789978,
-0.7584450840950012,
-0.47743698954582214,
-0.472922146320343,
0.4951796531677246,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ContextualAI/trivia_qa | ContextualAI | 2023-10-07T00:42:28Z | 103 | 1 | null | [
"region:us"
] | 2023-10-07T00:42:28Z | 2023-10-07T00:40:15.000Z | 2023-10-07T00:40:15 | ---
dataset_info:
features:
- name: target
dtype: string
- name: query
dtype: string
- name: gold_generation
sequence: string
splits:
- name: train
num_bytes: 29497317
num_examples: 78785
- name: dev
num_bytes: 3349643
num_examples: 8837
- name: test
num_bytes: 4316214
num_examples: 11313
download_size: 22579595
dataset_size: 37163174
---
# Dataset Card for "trivia_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5585974454879761,
-0.32892319560050964,
0.4128683805465698,
0.16084228456020355,
-0.2802015244960785,
0.2535649538040161,
0.4594157934188843,
-0.13603299856185913,
0.9791220426559448,
0.3879472017288208,
-0.6806502938270569,
-0.8291207551956177,
-0.2868947982788086,
-0.13887378573417664... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sunjun/medqa_sj | sunjun | 2023-10-24T12:49:39Z | 103 | 0 | null | [
"region:us"
] | 2023-10-24T12:49:39Z | 2023-10-24T12:49:24.000Z | 2023-10-24T12:49:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: meta_info
dtype: string
- name: answer_idx
dtype: string
- name: metamap_phrases
sequence: string
- name: choices
sequence: string
- name: answer_index
dtype: int64
splits:
- name: train
num_bytes: 16555933
num_examples: 10178
- name: test
num_bytes: 2118034
num_examples: 1273
download_size: 9784195
dataset_size: 18673967
---
# Dataset Card for "medqa_sj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4727483093738556,
-0.15139169991016388,
0.385734498500824,
-0.019278855994343758,
-0.16871775686740875,
0.0632580816745758,
0.3433413803577423,
0.10234478116035461,
0.9393083453178406,
0.6102229952812195,
-0.7972927689552307,
-0.757050096988678,
-0.6623063683509827,
-0.10268877446651459... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
krishaamer/taiwanese-college-students | krishaamer | 2023-11-02T13:13:35Z | 103 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-02T13:13:35Z | 2023-10-31T02:56:27.000Z | 2023-10-31T02:56:27 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andreped/AeroPath | andreped | 2023-11-13T22:36:21Z | 103 | 4 | null | [
"task_categories:image-segmentation",
"size_categories:1B<n<10B",
"language:en",
"license:mit",
"medical",
"arxiv:2311.01138",
"region:us"
] | 2023-11-13T22:36:21Z | 2023-11-03T21:53:46.000Z | 2023-11-03T21:53:46 | ---
license: mit
task_categories:
- image-segmentation
language:
- en
tags:
- medical
pretty_name: AeroPath
size_categories:
- 1B<n<10B
---
This README file was generated on 03-11-2023 by David Bouget.
It was then updated on 07-11-2023 to add Hugging Face metadata by André Pedersen.
Last updated: 07-11-2023.
----------------------
ACCESSING THE DATASET
----------------------
To access the data, it is HIGHLY RECOMMENDED to use the Python API.
See this notebook on how to get started: https://colab.research.google.com/gist/andreped/6070d1d2914a9ce5847d4b3e687188b7/aeropath-load-dataset-example.ipynb
--------------------
GENERAL INFORMATION
--------------------
1. Title of Dataset: AeroPath
2. Publication and DOI: "AeroPath: An airway segmentation benchmark dataset with challenging pathology". https://arxiv.org/abs/2311.01138
3. Contact Information
Name: Erlend F. Hofstad
Institution: Medical Technology Department, SINTEF Digital, Trondheim
Email: erlend.hofstad@sintef.no
Website: https://www.sintef.no/en/all-employees/employee/erlend.hofstad/
4. Contributors: Karen-Helene Støverud, Haakon Olav Leira, Erlend F. Hofstad, Andre Pedersen, David Bouget, and Thomas Langø.
5. Kind of data: computed tomography angiography (CTA) scans and binary annotation masks, all stored as NifTI files (*.nii.gz).
6. Date of data collection/generation: .
7. Geographic location: Trondheim, Norway.
8. Funding sources: the Ministry of Health and Care Services of Norway through the Norwegian National Research Center for Minimally Invasive and Image-Guided Diagnostics and Therapy (MiDT) at St. Olavs hospital, Trondheim University Hospital, Trondheim, Norway. The research leading to these results has in addition received funding from the Norwegian Financial Mechanism 2014-2021 under the project RO-NO2019-0138, 19/2020 “Improving Cancer Diagnostics in Flexible Endoscopy using Artificial Intelligence and Medical Robotics” IDEAR, Contract No. 19/2020.
9. Description of dataset:
General description and ethics approvals: The dataset contains 27 computed tomography angiography
(CTA) scans, acquired using the Thorax Lung protocol at St. Olavs hospital (Trondheim, Norway). The included patients (nine women), aged 52 to 84 (median 70), were all undergoing diagnostic tests for lung cancer and had a wide range of pathologies including malignant tumors, sarcoidosis, and emphysema.
---------------------------
SHARING/ACCESS INFORMATION
---------------------------
1. Licenses/Restrictions: CC-BY 4.0 (see license.md).
2. Recommended citation: See citation recommended at https://github.com/raidionics/AeroPath.
---------------------
DATA & FILE OVERVIEW
---------------------
```
1. File List:
README.md
license.md
└── 1/
└── 1_CT_HR.nii.gz
└── 1_CT_HR_label_airways.nii.gz
└── 1_CT_HR_label_lungs.nii.gz
.
.
.
└── 27/
└── 27_CT_HR.nii.gz
└── 27_CT_HR_label_airways.nii.gz
└── 27_CT_HR_label_lungs.nii.gz
```
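Given the fixed layout above, the expected relative paths for all 27 patients can be generated programmatically. This is a minimal sketch of pure path construction; it does not download or open the NIfTI files:

```python
def aeropath_files(patient_ids=range(1, 28)):
    """Yield (ct, airways_label, lungs_label) relative paths per patient,
    following the file layout documented above."""
    for pid in patient_ids:
        stem = f"{pid}/{pid}_CT_HR"
        yield (f"{stem}.nii.gz",
               f"{stem}_label_airways.nii.gz",
               f"{stem}_label_lungs.nii.gz")

files = list(aeropath_files())
print(len(files))   # -> 27
print(files[0][0])  # -> 1/1_CT_HR.nii.gz
```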
---------------------------
METHODOLOGICAL INFORMATION
---------------------------
1. Description of sources and methods used for collection/generation of data:
Dataset statistics
Overall, the CT scan dimensions cover [487 : 512] × [441 : 512] × [241 : 829] voxels, and the trans-axial voxel size ranges over [0.68 : 0.76] × [0.68 : 0.75] mm², with a reconstructed slice thickness of [0.5 : 1.25] mm.
Annotation procedures
The annotation process for generating the ground truth was performed in three steps. First, the largest components (i.e., trachea and the first branches) were extracted based on a region growing, or a grow-cut method. Due to leakage, the region growing method did not yield satisfactory results in all cases. Therefore, for certain cases, the grow-cut method in 3D Slicer was used instead. In the second step, BronchiNet was employed to segment the smaller peripheral airways. In the third and final step, the segmentations were refined manually. Bronchial fragments and missed segments were connected, before false positives and fragments that could not be connected based on visual inspection were removed. All manual corrections were performed using the default segment editor in 3D Slicer. The manual correction was performed by a trained engineer, supervised by a pulmonologist. Finally, all annotations were verified on a case-by-case basis by a pulmonologist. The final annotations from the AeroPath segmentation included on average 128 ± 56 branches per CT scan. | [
-0.4793676733970642,
-0.5403580069541931,
0.5173532366752625,
-0.10319541394710541,
-0.4391578733921051,
-0.13081984221935272,
0.20502182841300964,
-0.5016775727272034,
0.6685490608215332,
0.5458431839942932,
-0.5511148571968079,
-0.5885005593299866,
-0.6600441932678223,
0.0579052306711673... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qg_koquad | lmqg | 2022-12-02T18:53:42Z | 102 | 3 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad_es",
"language:ko",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-02T18:53:42Z | 2022-06-02T23:42:21.000Z | 2022-06-02T23:42:21 | ---
license: cc-by-4.0
pretty_name: KorQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad_es
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_korquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [KorQuAD](https://huggingface.co/datasets/squad_kor_v1) for the question generation (QG) task.
Since the original dataset only contains training/validation sets, we manually sampled a test set from the training set, which
has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
Korean (ko)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "함수해석학이 주목하는 탐구는?",
"paragraph": "변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구분야는 복소해석학이라고 한다. 함수해석학은 함수의 공간(특히 무한차원)의 탐구에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.",
"answer": "함수의 공간(특히 무한차원)의 탐구",
"sentence": "함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다.",
"paragraph_sentence": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. <hl> 함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다. <hl> 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
"paragraph_answer": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. 함수해석학은 <hl> 함수의 공간(특히 무한차원)의 탐구 <hl>에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
"sentence_answer": "함수해석학은 <hl> 함수의 공간(특히 무한차원)의 탐구 <hl> 에 주목한다."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and
`paragraph_sentence` feature is for sentence-aware question generation.
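A `paragraph_answer`-style input can be reproduced from the raw fields by wrapping the answer span with the highlight token. The helper below is our own illustration of that construction (the released data already contains these fields), shown on an English stand-in for the Korean example above:

```python
def highlight_answer(paragraph: str, answer: str, hl: str = "<hl>") -> str:
    """Wrap the first occurrence of `answer` in `paragraph` with highlight tokens."""
    idx = paragraph.find(answer)
    if idx < 0:
        raise ValueError("answer not found in paragraph")
    return f"{paragraph[:idx]}{hl} {answer} {hl}{paragraph[idx + len(answer):]}"

print(highlight_answer("The model studies function spaces.", "function spaces"))
# -> The model studies <hl> function spaces <hl>.
```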
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|54556| 5766 |5766 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.6775094270706177,
-1.0361353158950806,
0.40088582038879395,
0.5321476459503174,
-0.33987897634506226,
-0.19661478698253632,
0.20129288733005524,
-0.13577106595039368,
0.26493048667907715,
0.3702968955039978,
-0.6156187653541565,
-0.46658140420913696,
-0.360052227973938,
0.24508747458457... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rungalileo/medical_transcription_40 | rungalileo | 2022-08-04T04:58:53Z | 102 | 5 | null | [
"region:us"
] | 2022-08-04T04:58:53Z | 2022-08-04T04:58:43.000Z | 2022-08-04T04:58:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mesmalif/amazon-shoe-reviews | mesmalif | 2023-02-06T16:07:08Z | 102 | 0 | null | [
"region:us"
] | 2023-02-06T16:07:08Z | 2023-02-06T16:06:43.000Z | 2023-02-06T16:06:43 | ---
dataset_info:
features:
- name: marketplace
dtype: string
- name: customer_id
dtype: string
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: product_parent
dtype: string
- name: product_title
dtype: string
- name: product_category
dtype: string
- name: labels
dtype: int64
- name: helpful_votes
dtype: int64
- name: total_votes
dtype: int64
- name: vine
dtype: int64
- name: verified_purchase
dtype: int64
- name: review_headline
dtype: string
- name: text
dtype: string
- name: review_date
dtype: string
splits:
- name: train
num_bytes: 34784832.6
num_examples: 90000
- name: test
num_bytes: 3864981.4
num_examples: 10000
download_size: 21283157
dataset_size: 38649814.0
---
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6160130500793457,
-0.1255297064781189,
0.1895919293165207,
0.43158987164497375,
-0.49590787291526794,
0.06720097362995148,
0.2993732690811157,
-0.3327762186527252,
0.7468741536140442,
0.3608717620372772,
-0.8918061852455139,
-0.8492857217788696,
-0.27535679936408997,
-0.3471758961677551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DynamicSuperb/DialogueActClassification_DailyTalk | DynamicSuperb | 2023-08-02T08:51:06Z | 102 | 0 | null | [
"region:us"
] | 2023-08-02T08:51:06Z | 2023-07-29T13:23:06.000Z | 2023-07-29T13:23:06 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1379347051.504
num_examples: 4758
download_size: 1292848993
dataset_size: 1379347051.504
---
# Dataset Card for "DailyTalk_DialogueActClassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3290974497795105,
-0.3524266481399536,
0.1527412235736847,
0.29033708572387695,
0.05254284292459488,
0.13679149746894836,
0.2204015702009201,
-0.3623763918876648,
0.6999845504760742,
0.49675697088241577,
-0.9749246835708618,
-0.9842048287391663,
-0.4589545428752899,
-0.5111706256866455,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mmnga/wikipedia-ja-20230720-1k | mmnga | 2023-09-26T04:24:04Z | 102 | 1 | null | [
"region:us"
] | 2023-09-26T04:24:04Z | 2023-09-26T04:24:00.000Z | 2023-09-26T04:24:00 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2746008.4742813315
num_examples: 1024
download_size: 1593280
dataset_size: 2746008.4742813315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-ja-20230720-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9118562340736389,
-0.25714415311813354,
0.12209151685237885,
0.25064608454704285,
-0.35784217715263367,
-0.3396075367927551,
0.2222978174686432,
-0.15873266756534576,
1.0750213861465454,
0.4785316586494446,
-1.0346235036849976,
-0.6941404938697815,
-0.49017608165740967,
-0.0431596487760... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MemGPT/MSC-Self-Instruct | MemGPT | 2023-11-02T07:40:08Z | 102 | 7 | null | [
"license:apache-2.0",
"arxiv:2310.08560",
"region:us"
] | 2023-11-02T07:40:08Z | 2023-10-11T02:51:50.000Z | 2023-10-11T02:51:50 | ---
license: apache-2.0
---
MemGPT
===
This is the self-instruct dataset of MSC conversations used for the MemGPT paper. For more information, please refer to memgpt.ai
The [MSC dataset](https://parl.ai/projects/msc/) is a dataset of multi-round human conversations. In this dataset, the goal is to come up with a conversation opener that is personalized to the user by referencing topics from the previous conversations.
These were generated while evaluating [MemGPT](https://arxiv.org/abs/2310.08560).
-0.5102867484092712,
-0.7855409979820251,
0.310076504945755,
-0.161484956741333,
-0.00491524301469326,
0.22704534232616425,
0.2115609496831894,
-0.20768211781978607,
0.365509569644928,
0.5182206034660339,
-1.1684118509292603,
-0.38103267550468445,
-0.22681495547294617,
0.1370438039302826,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
solomonk/reddit_mental_health_posts | solomonk | 2022-01-11T15:40:01Z | 101 | 9 | null | [
"region:us"
] | 2022-01-11T15:40:01Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Reddit posts about mental health
## files
- adhd.csv from r/adhd
- aspergers.csv from r/aspergers
- depression.csv from r/depression
- ocd.csv from r/ocd
- ptsd.csv from r/ptsd
## fields
- author
- body
- created_utc
- id
- num_comments
- score
- subreddit
- title
- upvote_ratio
- url
For more details about these fields, see [Praw Submission](https://praw.readthedocs.io/en/latest/code_overview/models/submission.html).
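Rows with these fields could be regenerated from Praw `Submission` objects with a mapping like the sketch below. The `FakeSubmission` class is a stand-in so the example runs without Reddit access, and we assume the dataset's `body` column corresponds to Praw's `selftext` attribute:

```python
FIELDS = ["author", "body", "created_utc", "id", "num_comments",
          "score", "subreddit", "title", "upvote_ratio", "url"]

def submission_to_row(sub) -> dict:
    """Map a Praw-like Submission object to one CSV row of this dataset.
    Assumption: the `body` column maps to Praw's `selftext` attribute."""
    row = {f: getattr(sub, f, None) for f in FIELDS if f != "body"}
    row["body"] = getattr(sub, "selftext", None)
    return row

class FakeSubmission:
    # Stand-in attributes mirroring a Praw Submission.
    author = "user1"; selftext = "some post text"; created_utc = 1_600_000_000
    id = "abc123"; num_comments = 4; score = 10; subreddit = "adhd"
    title = "a title"; upvote_ratio = 0.93; url = "https://reddit.com/abc123"

row = submission_to_row(FakeSubmission())
print(sorted(row) == sorted(FIELDS))  # -> True
```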
| [
-0.2937159240245819,
-0.6502050757408142,
0.8956801295280457,
0.45319312810897827,
-0.1706322282552719,
0.30137890577316284,
0.18322375416755676,
-0.305054247379303,
0.7659636735916138,
0.32743728160858154,
-0.8720706105232239,
-0.6761786341667175,
-0.7125566005706787,
0.3574896454811096,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adithya7/xlel_wd | adithya7 | 2022-07-13T07:46:57Z | 101 | 1 | null | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:af",
"language:ar",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"languag... | 2022-07-13T07:46:57Z | 2022-04-22T02:50:11.000Z | 2022-04-22T02:50:11 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- be
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- es
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- ko
- ml
- mr
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- si
- sk
- sl
- sr
- sv
- sw
- ta
- te
- th
- tr
- uk
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: XLEL-WD is a multilingual event linking dataset. This dataset contains
mention references in multilingual Wikipedia/Wikinews articles to event items from
Wikidata. The descriptions for Wikidata event items are taken from the corresponding
Wikipedia articles.
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories: []
task_ids: []
---
# Dataset Card for XLEL-WD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/adithya7/xlel-wd>
- **Repository:** <https://github.com/adithya7/xlel-wd>
- **Paper:** <https://arxiv.org/abs/2204.06535>
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa
### Dataset Summary
XLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.
The descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from [adithya7/xlel_wd_dictionary](https://huggingface.co/datasets/adithya7/xlel_wd_dictionary).
### Supported Tasks and Leaderboards
This dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.
- Multilingual linking: mention and the event descriptions are in the same language.
- Crosslingual linking: the event descriptions are only available in English.
### Languages
This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.
| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |
## Dataset Structure
### Data Instances
Each instance in the `train.jsonl`, `dev.jsonl` and `test.jsonl` files follows the template below.
```json
{
"context_left": "Minibaev's first major international medal came in the men's synchronized 10 metre platform event at the ",
"mention": "2010 European Championships",
"context_right": ".",
"context_lang": "en",
"label_id": "830917",
}
```
### Data Fields
| Field | Meaning |
| ----- | ------- |
| `mention` | text span of the mention |
| `context_left` | left paragraph context from the document |
| `context_right` | right paragraph context from the document |
| `context_lang` | language of the context (and mention) |
| `context_title` | document title of the mention (only Wikinews subset) |
| `context_date` | document publication date of the mention (only Wikinews subset) |
| `label_id` | Wikidata label ID for the event. E.g. 830917 refers to Q830917 from Wikidata. |
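Putting these fields together, the full surface context and the Wikidata QID can be reconstructed from an instance. A minimal sketch using the example instance above (the helper names are our own):

```python
def mention_context(instance: dict) -> str:
    """Concatenate left context, mention span, and right context."""
    return (instance["context_left"] + instance["mention"]
            + instance["context_right"])

def wikidata_qid(instance: dict) -> str:
    """`label_id` stores the numeric part; prefix 'Q' to get the Wikidata ID."""
    return "Q" + instance["label_id"]

instance = {
    "context_left": "Minibaev's first major international medal came in the "
                    "men's synchronized 10 metre platform event at the ",
    "mention": "2010 European Championships",
    "context_right": ".",
    "context_lang": "en",
    "label_id": "830917",
}

print(wikidata_qid(instance))  # -> Q830917
print(mention_context(instance).endswith("2010 European Championships."))  # -> True
```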
### Data Splits
The Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup.
| | Train | Dev | Test | Total |
| ---- | :-----: | :---: | :----: | :-----: |
| Events | 8653 | 1090 | 1204 | 10947 |
| Event Sequences | 6758 | 844 | 846 | 8448 |
| Mentions | 1.44M | 165K | 190K | 1.8M |
| Languages | 44 | 44 | 44 | 44 |
The Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.
| | (Cross-domain) Test | (Zero-shot) Test |
| --- | :------------------: | :-----: |
| Events | 802 | 149 |
| Mentions | 2562 | 437 |
| Languages | 27 | 21 |
## Dataset Creation
### Curation Rationale
This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it's unclear if the same methodologies can be extended to linking mentions to events from a KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.
### Source Data
#### Initial Data Collection and Normalization
First, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.
#### Who are the source language producers?
The documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.
### Annotations
#### Annotation process
This dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality.
#### Who are the annotators?
The annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added by the original Wiki contributors.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
XLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198).
## Additional Information
### Dataset Curators
The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).
### Licensing Information
XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bib
@article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
}
```
### Contributions
Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
DFKI-SLT/cross_re | DFKI-SLT | 2023-01-19T09:19:12Z | 101 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|cross_ner",
"language:en",
"cross domain",
"ai",
"news",
"mus... | 2023-01-19T09:19:12Z | 2023-01-19T09:18:42.000Z | 2023-01-19T09:18:42 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: CrossRE is a cross-domain dataset for relation extraction
size_categories:
- 10K<n<100K
source_datasets:
- extended|cross_ner
tags:
- cross domain
- ai
- news
- music
- literature
- politics
- science
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: ai
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 62411
num_examples: 100
- name: validation
num_bytes: 183717
num_examples: 350
- name: test
num_bytes: 217353
num_examples: 431
download_size: 508107
dataset_size: 463481
- config_name: literature
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 62699
num_examples: 100
- name: validation
num_bytes: 246214
num_examples: 400
- name: test
num_bytes: 264450
num_examples: 416
download_size: 635130
dataset_size: 573363
- config_name: music
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 69846
num_examples: 100
- name: validation
num_bytes: 261497
num_examples: 350
- name: test
num_bytes: 312165
num_examples: 399
download_size: 726956
dataset_size: 643508
- config_name: news
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 49102
num_examples: 164
- name: validation
num_bytes: 77952
num_examples: 350
- name: test
num_bytes: 96301
num_examples: 400
download_size: 239763
dataset_size: 223355
- config_name: politics
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 76004
num_examples: 101
- name: validation
num_bytes: 277633
num_examples: 350
- name: test
num_bytes: 295294
num_examples: 400
download_size: 726427
dataset_size: 648931
- config_name: science
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 63876
num_examples: 103
- name: validation
num_bytes: 224402
num_examples: 351
- name: test
num_bytes: 249075
num_examples: 400
download_size: 594058
dataset_size: 537353
---
# Dataset Card for CrossRE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [CrossRE](https://github.com/mainlp/CrossRE)
- **Paper:** [CrossRE: A Cross-Domain Dataset for Relation Extraction](https://arxiv.org/abs/2210.09345)
### Dataset Summary
CrossRE is a new, freely available cross-domain benchmark for RE, which comprises six distinct text domains and includes
multi-label annotations. It includes the following domains: news, politics, natural science, music, literature and
artificial intelligence. The semantic relations are annotated on top of CrossNER (Liu et al., 2021), a cross-domain
dataset for NER which contains domain-specific entity types.
The dataset contains 17 relation labels for the six domains: PART-OF, PHYSICAL, USAGE, ROLE, SOCIAL,
GENERAL-AFFILIATION, COMPARE, TEMPORAL, ARTIFACT, ORIGIN, TOPIC, OPPOSITE, CAUSE-EFFECT, WIN-DEFEAT, TYPEOF, NAMED, and
RELATED-TO.
For details, see the paper: https://arxiv.org/abs/2210.09345
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in CrossRE is in English (BCP-47 en)
## Dataset Structure
### Data Instances
#### news
- **Size of downloaded dataset files:** 0.24 MB
- **Size of the generated dataset:** 0.22 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "news-train-1",
"sentence": ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."],
"ner": [
{"id-start": 0, "id-end": 0, "entity-type": "organisation"},
{"id-start": 2, "id-end": 3, "entity-type": "misc"},
{"id-start": 6, "id-end": 7, "entity-type": "misc"}
],
"relations": [
{"id_1-start": 0, "id_1-end": 0, "id_2-start": 2, "id_2-end": 3, "relation-type": "opposite", "Exp": "rejects", "Un": False, "SA": False},
{"id_1-start": 2, "id_1-end": 3, "id_2-start": 6, "id_2-end": 7, "relation-type": "opposite", "Exp": "calls_for_boycot_of", "Un": False, "SA": False},
{"id_1-start": 2, "id_1-end": 3, "id_2-start": 6, "id_2-end": 7, "relation-type": "topic", "Exp": "", "Un": False, "SA": False}
]
}
```
#### politics
- **Size of downloaded dataset files:** 0.73 MB
- **Size of the generated dataset:** 0.65 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "politics-train-1",
"sentence": ["Parties", "with", "mainly", "Eurosceptic", "views", "are", "the", "ruling", "United", "Russia", ",", "and", "opposition", "parties", "the", "Communist", "Party", "of", "the", "Russian", "Federation", "and", "Liberal", "Democratic", "Party", "of", "Russia", "."],
"ner": [
{"id-start": 8, "id-end": 9, "entity-type": "politicalparty"},
{"id-start": 15, "id-end": 20, "entity-type": "politicalparty"},
{"id-start": 22, "id-end": 26, "entity-type": "politicalparty"}
],
"relations": [
{"id_1-start": 8, "id_1-end": 9, "id_2-start": 15, "id_2-end": 20, "relation-type": "opposite", "Exp": "in_opposition", "Un": False, "SA": False},
{"id_1-start": 8, "id_1-end": 9, "id_2-start": 22, "id_2-end": 26, "relation-type": "opposite", "Exp": "in_opposition", "Un": False, "SA": False}
]
}
```
#### science
- **Size of downloaded dataset files:** 0.59 MB
- **Size of the generated dataset:** 0.54 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "science-train-1",
"sentence": ["They", "may", "also", "use", "Adenosine", "triphosphate", ",", "Nitric", "oxide", ",", "and", "ROS", "for", "signaling", "in", "the", "same", "ways", "that", "animals", "do", "."],
"ner": [
{"id-start": 4, "id-end": 5, "entity-type": "chemicalcompound"},
{"id-start": 7, "id-end": 8, "entity-type": "chemicalcompound"},
{"id-start": 11, "id-end": 11, "entity-type": "chemicalcompound"}
],
"relations": []
}
```
#### music
- **Size of downloaded dataset files:** 0.73 MB
- **Size of the generated dataset:** 0.64 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "music-train-1",
"sentence": ["In", "2003", ",", "the", "Stade", "de", "France", "was", "the", "primary", "site", "of", "the", "2003", "World", "Championships", "in", "Athletics", "."],
"ner": [
{"id-start": 4, "id-end": 6, "entity-type": "location"},
{"id-start": 13, "id-end": 17, "entity-type": "event"}
],
"relations": [
{"id_1-start": 13, "id_1-end": 17, "id_2-start": 4, "id_2-end": 6, "relation-type": "physical", "Exp": "", "Un": False, "SA": False}
]
}
```
#### literature
- **Size of downloaded dataset files:** 0.64 MB
- **Size of the generated dataset:** 0.57 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "literature-train-1",
"sentence": ["In", "1351", ",", "during", "the", "reign", "of", "Emperor", "Toghon", "Temür", "of", "the", "Yuan", "dynasty", ",", "93rd-generation", "descendant", "Kong", "Huan", "(", "孔浣", ")", "'", "s", "2nd", "son", "Kong", "Shao", "(", "孔昭", ")", "moved", "from", "China", "to", "Korea", "during", "the", "Goryeo", ",", "and", "was", "received", "courteously", "by", "Princess", "Noguk", "(", "the", "Mongolian-born", "wife", "of", "the", "future", "king", "Gongmin", ")", "."],
"ner": [
{"id-start": 7, "id-end": 9, "entity-type": "person"},
{"id-start": 12, "id-end": 13, "entity-type": "country"},
{"id-start": 17, "id-end": 18, "entity-type": "writer"},
{"id-start": 20, "id-end": 20, "entity-type": "writer"},
{"id-start": 26, "id-end": 27, "entity-type": "writer"},
{"id-start": 29, "id-end": 29, "entity-type": "writer"},
{"id-start": 33, "id-end": 33, "entity-type": "country"},
{"id-start": 35, "id-end": 35, "entity-type": "country"},
{"id-start": 38, "id-end": 38, "entity-type": "misc"},
{"id-start": 45, "id-end": 46, "entity-type": "person"},
{"id-start": 49, "id-end": 50, "entity-type": "misc"},
{"id-start": 55, "id-end": 55, "entity-type": "person"}
],
"relations": [
{"id_1-start": 7, "id_1-end": 9, "id_2-start": 12, "id_2-end": 13, "relation-type": "role", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 7, "id_1-end": 9, "id_2-start": 12, "id_2-end": 13, "relation-type": "temporal", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 17, "id_1-end": 18, "id_2-start": 26, "id_2-end": 27, "relation-type": "social", "Exp": "family", "Un": False, "SA": False},
{"id_1-start": 20, "id_1-end": 20, "id_2-start": 17, "id_2-end": 18, "relation-type": "named", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 33, "id_2-end": 33, "relation-type": "physical", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 35, "id_2-end": 35, "relation-type": "physical", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 38, "id_2-end": 38, "relation-type": "temporal", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 45, "id_2-end": 46, "relation-type": "social", "Exp": "greeted_by", "Un": False, "SA": False},
{"id_1-start": 29, "id_1-end": 29, "id_2-start": 26, "id_2-end": 27, "relation-type": "named", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 45, "id_1-end": 46, "id_2-start": 55, "id_2-end": 55, "relation-type": "social", "Exp": "marriage", "Un": False, "SA": False},
{"id_1-start": 49, "id_1-end": 50, "id_2-start": 45, "id_2-end": 46, "relation-type": "named", "Exp": "", "Un": False, "SA": False}
]
}
```
#### ai
- **Size of downloaded dataset files:** 0.51 MB
- **Size of the generated dataset:** 0.46 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "ai-train-1",
"sentence": ["Popular", "approaches", "of", "opinion-based", "recommender", "system", "utilize", "various", "techniques", "including", "text", "mining", ",", "information", "retrieval", ",", "sentiment", "analysis", "(", "see", "also", "Multimodal", "sentiment", "analysis", ")", "and", "deep", "learning", "X.Y.", "Feng", ",", "H.", "Zhang", ",", "Y.J.", "Ren", ",", "P.H.", "Shang", ",", "Y.", "Zhu", ",", "Y.C.", "Liang", ",", "R.C.", "Guan", ",", "D.", "Xu", ",", "(", "2019", ")", ",", ",", "21", "(", "5", ")", ":", "e12957", "."],
"ner": [
{"id-start": 3, "id-end": 5, "entity-type": "product"},
{"id-start": 10, "id-end": 11, "entity-type": "field"},
{"id-start": 13, "id-end": 14, "entity-type": "task"},
{"id-start": 16, "id-end": 17, "entity-type": "task"},
{"id-start": 21, "id-end": 23, "entity-type": "task"},
{"id-start": 26, "id-end": 27, "entity-type": "field"},
{"id-start": 28, "id-end": 29, "entity-type": "researcher"},
{"id-start": 31, "id-end": 32, "entity-type": "researcher"},
{"id-start": 34, "id-end": 35, "entity-type": "researcher"},
{"id-start": 37, "id-end": 38, "entity-type": "researcher"},
{"id-start": 40, "id-end": 41, "entity-type": "researcher"},
{"id-start": 43, "id-end": 44, "entity-type": "researcher"},
{"id-start": 46, "id-end": 47, "entity-type": "researcher"},
{"id-start": 49, "id-end": 50, "entity-type": "researcher"}
],
"relations": [
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 10, "id_2-end": 11, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 10, "id_2-end": 11, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 13, "id_2-end": 14, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 13, "id_2-end": 14, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 16, "id_2-end": 17, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 16, "id_2-end": 17, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 26, "id_2-end": 27, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 26, "id_2-end": 27, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 21, "id_1-end": 23, "id_2-start": 16, "id_2-end": 17, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 21, "id_1-end": 23, "id_2-start": 16, "id_2-end": 17, "relation-type": "type-of", "Exp": "", "Un": False, "SA": False}
]
}
```
### Data Fields
The data fields are the same among all splits.
- `doc_key`: the instance id of this sentence, a `string` feature.
- `sentence`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `ner`: the list of named entities in this sentence, a `list` of `dict` features.
- `id-start`: the start index of the entity, a `int` feature.
- `id-end`: the end index of the entity, a `int` feature.
- `entity-type`: the type of the entity, a `string` feature.
- `relations`: the list of relations in this sentence, a `list` of `dict` features.
- `id_1-start`: the start index of the first entity, a `int` feature.
- `id_1-end`: the end index of the first entity, a `int` feature.
- `id_2-start`: the start index of the second entity, a `int` feature.
- `id_2-end`: the end index of the second entity, a `int` feature.
- `relation-type`: the type of the relation, a `string` feature.
- `Exp`: the explanation of the relation type assigned, a `string` feature.
- `Un`: uncertainty of the annotator, a `bool` feature.
- `SA`: existence of syntax ambiguity which poses a challenge for the annotator, a `bool` feature.
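As the data instances above show, span indices are token offsets with *inclusive* end positions (e.g. `id-start: 4, id-end: 6` covers "Stade de France"). A small helper can therefore materialize relation triples from a record; this is a sketch that assumes the list-of-dicts layout shown in the instance examples:

```python
def extract_triples(example):
    """Materialize (head, relation, tail) text triples; end indices are inclusive."""
    tokens = example["sentence"]

    def span(start, end):
        # Token spans use inclusive end indices, hence end + 1
        return " ".join(tokens[start : end + 1])

    return [
        (span(r["id_1-start"], r["id_1-end"]),
         r["relation-type"],
         span(r["id_2-start"], r["id_2-end"]))
        for r in example["relations"]
    ]

# The music-domain instance from the examples above (Exp/Un/SA fields omitted)
example = {
    "sentence": ["In", "2003", ",", "the", "Stade", "de", "France", "was", "the",
                 "primary", "site", "of", "the", "2003", "World", "Championships",
                 "in", "Athletics", "."],
    "relations": [
        {"id_1-start": 13, "id_1-end": 17, "id_2-start": 4, "id_2-end": 6,
         "relation-type": "physical"},
    ],
}
triples = extract_triples(example)
# triples[0] == ("2003 World Championships in Athletics", "physical", "Stade de France")
```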
### Data Splits
#### Sentences
| | Train | Dev | Test | Total |
|--------------|---------|---------|---------|---------|
| news | 164 | 350 | 400 | 914 |
| politics | 101 | 350 | 400 | 851 |
| science | 103 | 351 | 400 | 854 |
| music | 100 | 350 | 399 | 849 |
| literature | 100 | 400 | 416 | 916 |
| ai | 100 | 350 | 431 | 881 |
| ------------ | ------- | ------- | ------- | ------- |
| total | 668 | 2,151 | 2,446 | 5,265 |
#### Relations
| | Train | Dev | Test | Total |
|--------------|---------|---------|---------|---------|
| news | 175 | 300 | 396 | 871 |
| politics | 502 | 1,616 | 1,831 | 3,949 |
| science | 355 | 1,340 | 1,393 | 3,088 |
| music | 496 | 1,861 | 2,333 | 4,690 |
| literature | 397 | 1,539 | 1,591 | 3,527 |
| ai | 350 | 1,006 | 1,127 | 2,483 |
| ------------ | ------- | ------- | ------- | ------- |
| total | 2,275 | 7,662 | 8,671 | 18,608 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{bassignana-plank-2022-crossre,
title = "Cross{RE}: A {C}ross-{D}omain {D}ataset for {R}elation {E}xtraction",
author = "Bassignana, Elisa and Plank, Barbara",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
year = "2022",
publisher = "Association for Computational Linguistics"
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
RicardoRei/wmt-da-human-evaluation | RicardoRei | 2023-02-17T10:41:18Z | 101 | 0 | null | [
"size_categories:1M<n<10M",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:gu",
"language:ha",
"language:hi",
"language:is",
"language:ja",
"language:kk",
"language:km",
"language:lt",
"language:lv",
"language:pl",... | 2023-02-17T10:41:18Z | 2023-02-16T18:49:07.000Z | 2023-02-16T18:49:07 | ---
license: apache-2.0
size_categories:
- 1M<n<10M
language:
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
tags:
- mt-evaluation
- WMT
- 41-lang-pairs
---
# Dataset Summary
This dataset contains all DA human annotations from previous WMT News Translation shared tasks.
The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: z score
- raw: direct assessment
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data for each year in the results section at https://www.statmt.org/wmt{YEAR}/results.html, e.g. for 2020 data: [https://www.statmt.org/wmt20/results.html](https://www.statmt.org/wmt20/results.html)
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-da-human-evaluation", split="train")
```
There is no standard train/test split for this dataset, but you can easily split it according to year, language pair or domain, e.g.:
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
Note that most of the data is from the news domain.
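Beyond filtering, simple aggregations are easy to compute from the rows. The sketch below averages the z-scores per language pair using only the standard library; the sample rows are illustrative, with the `lp` and `score` column names taken from the list above:

```python
from collections import defaultdict

def mean_score_by_lp(rows):
    """Average z-score per language pair from records with 'lp' and 'score' keys."""
    sums = defaultdict(lambda: [0.0, 0])  # lp -> [running sum, count]
    for r in rows:
        acc = sums[r["lp"]]
        acc[0] += r["score"]
        acc[1] += 1
    return {lp: total / n for lp, (total, n) in sums.items()}

# Illustrative rows with the schema described above
rows = [
    {"lp": "en-de", "score": 0.2},
    {"lp": "en-de", "score": 0.4},
    {"lp": "zh-en", "score": -0.1},
]
# mean_score_by_lp(rows) gives roughly {"en-de": 0.3, "zh-en": -0.1}
```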
## Citation Information
If you use this data please cite the WMT findings from previous years:
- [Findings of the 2017 Conference on Machine Translation (WMT17)](https://aclanthology.org/W17-4717.pdf)
- [Findings of the 2018 Conference on Machine Translation (WMT18)](https://aclanthology.org/W18-6401.pdf)
- [Findings of the 2019 Conference on Machine Translation (WMT19)](https://aclanthology.org/W19-5301.pdf)
- [Findings of the 2020 Conference on Machine Translation (WMT20)](https://aclanthology.org/2020.wmt-1.1.pdf)
- [Findings of the 2021 Conference on Machine Translation (WMT21)](https://aclanthology.org/2021.wmt-1.1.pdf)
- [Findings of the 2022 Conference on Machine Translation (WMT22)](https://aclanthology.org/2022.wmt-1.1.pdf)
talgatzh/xsum-kk3 | talgatzh | 2023-11-02T07:37:59Z | 101 | 0 | xsum | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:xsum",
"license:unknown",
"arxiv:1808.08745",
"region:us"
] | 2023-11-02T07:37:59Z | 2023-05-29T04:09:52.000Z | 2023-05-29T04:09:52 | ---
annotations_creators:
- found
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- xsum
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 139159410
num_examples: 5
download_size: 139159410
dataset_size: 139159410
---
# Dataset Card for "xsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:shashi.narayan@ed.ac.uk)
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
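As a quick sanity check on these fields, one can compute, for instance, how strongly an article is compressed into its one-sentence summary. This is a sketch; the record below is illustrative, with only the field names taken from the schema above:

```python
def compression_ratio(example):
    """Document words per summary word (XSum summaries are a single sentence)."""
    doc_words = len(example["document"].split())
    sum_words = len(example["summary"].split())
    return doc_words / sum_words

# Illustrative record following the field schema above
example = {
    "id": "29750031",
    "document": "The quick brown fox jumps over the lazy dog near the river bank today",
    "summary": "A fox jumps over a dog",
}
# 14 document words over 6 summary words, i.e. roughly 2.33
```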
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
dkoterwa/kor-sts | dkoterwa | 2023-07-25T09:52:30Z | 101 | 1 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-07-25T09:52:30Z | 2023-07-18T14:17:23.000Z | 2023-07-18T14:17:23 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: genre
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 1034815
num_examples: 5691
- name: valid
num_bytes: 297254
num_examples: 1465
- name: test
num_bytes: 247409
num_examples: 1376
download_size: 837346
dataset_size: 1579478
---
# Korean Semantic Textual Similarity (KorSTS) Dataset
For a better dataset description, please visit this GitHub repository prepared by the authors of the article: [LINK](https://github.com/kakaobrain/kor-nlu-datasets) <br>
<br>
**This dataset was prepared by converting the tsv files from this repository.** The goal was to make the dataset available to a broader audience; I am not its original author. <br>
Because of the specifics of the `read_csv` method from the Pandas library, a few observations had to be deleted due to formatting issues (54 in train, 35 in valid, and 1 in test).
Additionally, **None values have been removed from the dataset** (5 from train, 1 from valid, and 3 from test).
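The cleaning described above can be sketched as follows (the file contents and read options here are illustrative assumptions, not the original conversion script):

```python
import io
import pandas as pd

# A tiny stand-in for one of the KorSTS tsv files; the real files are larger
# and contain more columns.
raw_tsv = (
    "genre\tscore\tsentence1\tsentence2\n"
    "main-news\t4.2\tA\tB\n"
    "main-news\t\tC\tD\n"        # missing score -> becomes NaN, dropped below
    "main-captions\t1.0\tE\t\n"  # missing sentence2 -> becomes NaN, dropped below
)

df = pd.read_csv(io.StringIO(raw_tsv), sep="\t",
                 quoting=3,              # csv.QUOTE_NONE: treat quotes as text
                 on_bad_lines="skip")    # skip rows that break the tsv layout
df = df.dropna()                          # remove rows with None/NaN values

print(len(df))  # only the fully populated row survives
```

Rows the parser cannot align with the header are skipped rather than raising, which mirrors how a handful of malformed observations were dropped per split.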
**How to download**
```python
from datasets import load_dataset
data = load_dataset("dkoterwa/kor-sts")
```
**If you use this dataset for research, please cite this paper:**
```
@article{ham2020kornli,
title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
journal={arXiv preprint arXiv:2004.03289},
year={2020}
}
```
Jaehun/DIMSUM | Jaehun | 2023-08-09T22:05:01Z | 101 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-09T22:05:01Z | 2023-08-09T21:27:28.000Z | 2023-08-09T21:27:28 | ---
license: apache-2.0
---
Kerenfuentes/holistic_bias | Kerenfuentes | 2023-09-29T21:18:24Z | 101 | 0 | null | [
"region:us"
] | 2023-09-29T21:18:24Z | 2023-09-22T21:53:16.000Z | 2023-09-22T21:53:16 | Entry not found
celsowm/bbc_news_ptbr | celsowm | 2023-11-04T20:34:46Z | 101 | 0 | null | [
"region:us"
] | 2023-11-04T20:34:46Z | 2023-11-04T20:34:33.000Z | 2023-11-04T20:34:33 | ---
dataset_info:
features:
- name: categoria
dtype: string
- name: titulo
dtype: string
- name: texto
dtype: string
- name: data
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 69873414
num_examples: 8637
download_size: 40455060
dataset_size: 69873414
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bbc_news_ptbr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cociweb/CV_15_1-hu | cociweb | 2023-11-19T21:17:31Z | 101 | 0 | null | [
"region:us"
] | 2023-11-19T21:17:31Z | 2023-11-18T02:11:26.000Z | 2023-11-18T02:11:26 | Entry not found
albertxu/CrosswordQA | albertxu | 2022-10-29T23:45:36Z | 100 | 4 | null | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:unknown",
"region:us"
] | 2022-10-29T23:45:36Z | 2022-04-26T08:05:14.000Z | 2022-04-26T08:05:14 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for CrosswordQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Albert Xu](mailto:albertxu@usc.edu) and [Eshaan Pathak](mailto:eshaanpathak@berkeley.edu)
### Dataset Summary
The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.
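The automatic answer segmentation mentioned above can be approximated with a simple dictionary-based dynamic program (the vocabulary and fallback behavior here are illustrative assumptions, not the authors' actual method):

```python
def segment(answer, vocab):
    """Split an all-caps crossword answer into known words, if possible."""
    s = answer.lower()
    n = len(s)
    best = [None] * (n + 1)   # best[i]: a list of words covering s[:i], or None
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is not None and s[j:i] in vocab:
                best[i] = best[j] + [s[j:i]]
                break
    if best[n] is None:
        return answer.capitalize()        # fall back to the raw answer
    return " ".join(w.capitalize() for w in best[n])

vocab = {"buzz", "lightyear", "light", "year", "mess"}
print(segment("BUZZLIGHTYEAR", vocab))  # -> Buzz Lightyear
print(segment("MESS", vocab))           # -> Mess
```

As the card notes, any such heuristic can occasionally pick a wrong split when several segmentations are valid.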
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
```
{
"id": 0,
"clue": "Clean-up target",
"answer": "mess"
}
```
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
autoevaluate/autoeval-staging-eval-project-xsum-c7d88063-10885461 | autoevaluate | 2022-07-15T09:10:49Z | 100 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T09:10:49Z | 2022-07-14T10:15:03.000Z | 2022-07-14T10:15:03 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-book-summary
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
Dahoas/code-review-instruct-critique-revision-python | Dahoas | 2023-01-08T15:22:19Z | 100 | 7 | null | [
"region:us"
] | 2023-01-08T15:22:19Z | 2023-01-08T15:22:14.000Z | 2023-01-08T15:22:14 | Entry not found
Nan-Do/instructional_code-search-net-python | Nan-Do | 2023-05-20T05:09:44Z | 100 | 9 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Python",
"Code generation",
"Instruction Response",
"region:us"
] | 2023-05-20T05:09:44Z | 2023-05-20T04:50:17.000Z | 2023-05-20T04:50:17 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 451473573
num_examples: 418545
download_size: 172777462
dataset_size: 451473573
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Python
- Code generation
- Instruction Response
pretty_name: Instructional Python Dataset
---
# Dataset Card for "instructional_code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Python.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-python
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
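A toy illustration of the template idea described above (these templates are invented for illustration; the linked notebook uses richer NLP-based variation):

```python
import random

# Hypothetical templates for the two task directions in the dataset.
CODE_TO_DESC = [
    "Generate a summary of the following Python 3 code:\n{code}",
    "What does this Python function do?\n{code}",
]
DESC_TO_CODE = [
    "Write a Python 3 function that {desc}",
    "Implement the following in Python:\n{desc}",
]

def make_pair(code, desc, rng):
    """Turn a (code, description) pair into one instruction/response example."""
    if rng.random() < 0.5:
        return {"INSTRUCTION": rng.choice(CODE_TO_DESC).format(code=code),
                "RESPONSE": desc, "SOURCE": "codesearchnet"}
    return {"INSTRUCTION": rng.choice(DESC_TO_CODE).format(desc=desc),
            "RESPONSE": code, "SOURCE": "codesearchnet"}

rng = random.Random(0)
pair = make_pair("def add(a, b):\n    return a + b", "adds two numbers", rng)
print(pair["INSTRUCTION"])
```

Each source (code, summary) pair yields either a code-to-description or a description-to-code example, matching the two task kinds listed earlier.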
### Licensing Information
Apache 2.0
tiedong/goat | tiedong | 2023-05-25T22:14:53Z | 100 | 19 | null | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-05-25T22:14:53Z | 2023-05-25T22:07:47.000Z | 2023-05-25T22:07:47 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset.json file contains ~1.7 million synthetic examples for arithmetic tasks, generated by dataset.ipynb.
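A minimal sketch of how such synthetic arithmetic examples can be generated (operand ranges, operators, and field names are assumptions; the actual dataset.ipynb may differ):

```python
import random

def make_example(rng):
    """Build one arithmetic question/answer pair."""
    op = rng.choice(["+", "-", "*"])
    a, b = rng.randint(0, 10**6), rng.randint(0, 10**6)
    return {"instruction": f"{a} {op} {b} = ",
            "output": str({"+": a + b, "-": a - b, "*": a * b}[op])}

rng = random.Random(42)
examples = [make_example(rng) for _ in range(3)]
for ex in examples:
    print(ex["instruction"] + ex["output"])
```

Scaling the loop to millions of iterations and serializing the list as JSON would produce a file of the same shape as dataset.json.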
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
clarin-knext/msmarco-pl | clarin-knext | 2023-06-07T08:22:03Z | 100 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:22:03Z | 2023-06-06T22:02:28.000Z | 2023-06-06T22:02:28 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl
vlsp-2023-vllm/truthful_qa | vlsp-2023-vllm | 2023-09-30T05:13:12Z | 100 | 0 | null | [
"region:us"
] | 2023-09-30T05:13:12Z | 2023-09-29T19:37:14.000Z | 2023-09-29T19:37:14 | ---
dataset_info:
features:
- name: question
dtype: string
- name: mc1_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int64
- name: mc2_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int64
splits:
- name: validation
num_bytes: 816354
num_examples: 785
download_size: 304272
dataset_size: 816354
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# TruthfulQA (Vietnamese translation version)
## Install
To install `lm-eval` from the github repository main branch, run:
```bash
git clone https://github.com/hieunguyen1053/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
## Basic Usage
> **Note**: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. vlsp-2023-vllm/hoa-1b4) on `truthful_qa` you can use the following command:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4 \
--tasks truthfulqa_vi \
--device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4,revision=step100000,dtype="float" \
--tasks truthfulqa_vi \
--device cuda:0
```
To evaluate models that are loaded via `AutoSeq2SeqLM` in Huggingface, you instead use `hf-seq2seq`. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`*
> **Warning**: Choosing the wrong model may result in erroneous outputs despite not erroring.
sunjun/medmcqa_sj | sunjun | 2023-10-24T12:51:09Z | 100 | 0 | null | [
"region:us"
] | 2023-10-24T12:51:09Z | 2023-10-24T12:49:59.000Z | 2023-10-24T12:49:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer_idx
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: answer_index
dtype: int64
splits:
- name: train
num_bytes: 155300377
num_examples: 182822
- name: test
num_bytes: 2810502
num_examples: 4183
download_size: 102706169
dataset_size: 158110879
---
# Dataset Card for "medmcqa_sj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
coastalcph/fm-updates-llama-7b | coastalcph | 2023-11-21T16:56:04Z | 100 | 0 | null | [
"region:us"
] | 2023-11-21T16:56:04Z | 2023-11-13T11:04:59.000Z | 2023-11-13T11:04:59 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: query
struct:
- name: label
dtype: string
- name: objects
list:
- name: aliases
sequence: string
- name: label
dtype: string
- name: qid
dtype: string
- name: qid
dtype: string
- name: rel_id
dtype: string
- name: relation
dtype: string
- name: prediction
struct:
- name: predictions
list:
- name: answer
dtype: string
- name: first_token_probability
dtype: float64
- name: per_token_probability
sequence: float64
- name: perplexity
dtype: float64
- name: query
dtype: string
- name: f1
dtype: float64
- name: relation
dtype: string
- name: type
dtype: string
- name: original_answer
dtype: string
- name: updates
sequence: string
splits:
- name: test
num_bytes: 440727.6757775683
num_examples: 480
- name: validation
num_bytes: 46827.315551366635
num_examples: 51
download_size: 380771
dataset_size: 487554.99132893496
---
# Dataset Card for "fm-updates-llama-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nlpconnect/dpr-nq-reader-v2 | nlpconnect | 2022-01-02T16:39:51Z | 99 | 0 | null | [
"region:us"
] | 2022-01-02T16:39:51Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
Tevatron/beir | Tevatron | 2022-07-08T00:17:30Z | 99 | 0 | null | [
"region:us"
] | 2022-07-08T00:17:30Z | 2022-06-07T05:59:24.000Z | 2022-06-07T05:59:24 | Entry not found
okite97/news-data | okite97 | 2022-08-25T10:36:01Z | 99 | 2 | null | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"region... | 2022-08-25T10:36:01Z | 2022-07-28T09:10:22.000Z | 2022-07-28T09:10:22 | ---
annotations_creators:
- other
language:
- 'en'
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: News Dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-class-classification
---
# Dataset Card for news-data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Dataset Curators](#dataset-curators)
### Dataset Summary
The News Dataset is an English-language dataset containing just over 4k unique news articles scraped from AriseTV, one of the most popular news television stations in Nigeria.
### Supported Tasks and Leaderboards
It supports news article classification into different categories.
### Languages
English
## Dataset Structure
### Data Instances
```
{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention',
 'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of',
 'Category': 'politics',
 'labels': 2}
```
### Data Fields
* Title: a string containing the headline of the news article
* Excerpt: a string containing a short extract from the body of the news
* Category: a string that tells the category of an example (string label)
* labels: integer telling the class of an example (label)
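The relationship between `Category` and `labels` can be reproduced with a simple mapping (the category list and its ordering below are an assumption for illustration; under this ordering `'politics'` happens to map to 2, matching the instance above):

```python
# Hypothetical category ordering; the real dataset fixes its own mapping.
CATEGORIES = ["business", "entertainment", "politics", "sports", "health"]
CAT2ID = {c: i for i, c in enumerate(CATEGORIES)}

example = {"Title": "Nigeria: APC Yet to Zone Party Positions Ahead of Convention",
           "Category": "politics"}
example["labels"] = CAT2ID[example["Category"]]  # string label -> integer class
print(example["labels"])
```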
### Data Splits
| Dataset Split | Number of instances in split |
| ----------- | ----------- |
| Train | 4,594 |
| Paragraph | 811 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The code for the dataset creation is at *https://github.com/chimaobi-okite/NLP-Projects-Competitions/blob/main/NewsCategorization/Data/NewsDataScraping.ipynb*. The examples were scraped from
<https://www.arise.tv/>
### Annotations
#### Annotation process
The annotation is based on the news category in the [arisetv](https://www.arise.tv) website
#### Who are the annotators?
Journalists at arisetv
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can classify news articles into categories.
This task is useful for efficiently organizing large quantities of text. It should be made clear that any category predictions produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.
### Discussion of Biases
This data is biased towards news from Nigeria, but a model built on it can still classify news from other parts of the world, with a slight degradation in performance.
### Dataset Curators
The dataset was created by people at AriseTV but was scraped by [@github-chimaobi-okite](https://github.com/chimaobi-okite/)
polinaeterna/lila_camera_traps | polinaeterna | 2023-01-18T12:10:17Z | 99 | 0 | null | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"biodiversity",
"camera trap data",
"wildlife monitoring... | 2023-01-18T12:10:17Z | 2023-01-18T12:10:16.000Z | 2023-01-18T12:10:16 | ---
annotations_creators:
- expert-generated
license:
- other
language_creators:
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-classification
tags:
- biodiversity
- camera trap data
- wildlife monitoring
pretty_name: LILA Camera Traps
duplicated_from: society-ethics/lila_camera_traps
---
# Dataset Card for LILA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Usage](#dataset-usage)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lila.science/
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [info@lila.science](mailto:info@lila.science)
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This data set marks the first time that disparate camera trap data sets have been aggregated into a single training environment with a single [taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
This data set consists only of camera trap image data sets, whereas the broader [LILA](https://lila.science/) website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those that want to harness ML for this topic.
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available [here](https://beerys.github.io/CaltechCameraTraps/).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact caltechcameratraps@gmail.com.
If you use this data set, please cite the associated manuscript:
```bibtex
@inproceedings{DBLP:conf/eccv/BeeryHP18,
author = {Sara Beery and
Grant Van Horn and
Pietro Perona},
title = {Recognition in Terra Incognita},
booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
Germany, September 8-14, 2018, Proceedings, Part {XVI}},
pages = {472--489},
year = {2018},
crossref = {DBLP:conf/eccv/2018-16},
url = {https://doi.org/10.1007/978-3-030-01270-0\_28},
doi = {10.1007/978-3-030-01270-0\_28},
timestamp = {Mon, 08 Oct 2018 17:08:07 +0200},
biburl = {https://dblp.org/rec/bib/conf/eccv/BeeryHP18},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{yousif2019dynamic,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif](mailto:hyypp5@mail.missouri.edu).
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{zhang2016animal,
title={Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification},
author={Zhang, Zhi and He, Zhihai and Cao, Guitao and Cao, Wenming},
journal={IEEE Transactions on Multimedia},
volume={18},
number={10},
pages={2079--2092},
year={2016},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif](mailto:hyypp5@mail.missouri.edu) and [Zhi Zhang](mailto:zzbhf@mail.missouri.edu).
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{tabak2019machine,
title={Machine learning to classify animal species in camera trap images: Applications in ecology},
author={Tabak, Michael A and Norouzzadeh, Mohammad S and Wolfson, David W and Sweeney, Steven J and VerCauteren, Kurt C and Snow, Nathan P and Halseth, Joseph M and Di Salvo, Paul A and Lewis, Jesse S and White, Michael D and others},
journal={Methods in Ecology and Evolution},
volume={10},
number={4},
pages={585--590},
year={2019},
publisher={Wiley Online Library}
}
```
For questions about this data set, contact [northamericancameratrapimages@gmail.com](mailto:northamericancameratrapimages@gmail.com).
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the [Wildlife Conservation Society](https://www.wcs.org/). The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so – as is the case with most camera trap data sets – empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set [on the LILA website](https://lila.science/datasets/wcscameratraps).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{anton2018monitoring,
title={Monitoring the mammalian fauna of urban areas using remote cameras and citizen science},
author={Anton, Victor and Hartley, Stephen and Geldenhuis, Andre and Wittmer, Heiko U},
journal={Journal of Urban Ecology},
volume={4},
number={1},
pages={juy002},
year={2018},
publisher={Oxford University Press}
}
```
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [Victor Anton](mailto:vykanton@gmail.com).
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations on 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented across multiple island sites spanning significantly different ecosystems (tropical, dry, and temperate forests). Additionally, this data set includes data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets, including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general, cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion-detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact [David Will](mailto:david.will@islandconservation.org) at Island Conservation.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact [Nathaniel Rindlaub](mailto:nathaniel.rindlaub@TNC.ORG) at The Nature Conservancy.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the [Snapshot Serengeti project](https://snapshotserengeti.org/) -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshotserengeti-v-2-0/SnapshotSerengeti_S1-11_v2.1.species_list.csv). We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
```bibtex
@misc{dryad_5pt92,
title = {Data from: Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna},
author = {Swanson, AB and Kosmala, M and Lintott, CJ and Simpson, RJ and Smith, A and Packer, C},
year = {2015},
journal = {Scientific Data},
URL = {https://doi.org/10.5061/dryad.5pt92},
doi = {doi:10.5061/dryad.5pt92},
publisher = {Dryad Digital Repository}
}
```
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14889 sequences of camera trap images, totaling 38074 images, from the [Snapshot Karoo](https://www.zooniverse.org/projects/shuebner729/snapshot-karoo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KAR/SnapshotKaroo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3611 sequences of camera trap images, totaling 10222 images, from the [Snapshot Kgalagadi](https://www.zooniverse.org/projects/shuebner729/snapshot-kgalagadi/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KGA/SnapshotKgalagadi_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13301 sequences of camera trap images, totaling 28544 images, from the [Snapshot Enonkishu](https://www.zooniverse.org/projects/aguthmann/snapshot-enonkishu) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/ENO/SnapshotEnonkishu_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12132 sequences of camera trap images, totaling 30227 images, from the [Snapshot Camdeboo](https://www.zooniverse.org/projects/shuebner729/snapshot-camdeboo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/CDB/SnapshotCamdeboo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71688 sequences of camera trap images, totaling 73034 images, from the [Snapshot Mountain Zebra](https://www.zooniverse.org/projects/meredithspalmer/snapshot-mountain-zebra/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/MTZ/SnapshotMountainZebra_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4747 sequences of camera trap images, totaling 10072 images, from the [Snapshot Kruger](https://www.zooniverse.org/projects/shuebner729/snapshot-kruger) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KRU/SnapshotKruger_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. Dataset.
For questions about this data set, contact saolawg@gmail.com.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying [GitHub repository](https://github.com/julianavelez1/Processing-Camera-Trap-Data-Using-AI).
If you use these data in a publication or report, please use the following citation:
```bibtex
@article{velez2022choosing,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
For questions about this data set, contact [Juliana Velez Gomez](mailto:julianavelezgomez@gmail.com).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/) is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in [COCO Camera Traps](https://github.com/Microsoft/CameraTraps/blob/master/data_management/README.md#coco-cameratraps-format) format.
All of the datasets share a common category taxonomy, which is defined on the [LILA website](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
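As a rough illustration of the COCO Camera Traps layout (all values below are made up, not drawn from any LILA dataset), annotations reference images by `image_id` and categories by `category_id`, so joining an annotation back to its image and class name looks like this:

```python
# Minimal, made-up COCO Camera Traps structure -- illustrative only
coco_ct = {
    "images": [
        {"id": "img_001", "file_name": "loc_01/img_001.jpg",
         "width": 1920, "height": 1080, "location": "loc_01"},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in absolute pixel coordinates
        {"id": "ann_001", "image_id": "img_001", "category_id": 1,
         "bbox": [100.0, 200.0, 300.0, 250.0]},
    ],
    "categories": [
        {"id": 0, "name": "empty"},
        {"id": 1, "name": "felis catus"},
    ],
}

# Resolve each annotation to its image file name and category name
categories = {c["id"]: c["name"] for c in coco_ct["categories"]}
images = {im["id"]: im for im in coco_ct["images"]}
labels = [
    (images[a["image_id"]]["file_name"], categories[a["category_id"]])
    for a in coco_ct["annotations"]
]
```

The real annotation files follow the format linked above; the field values here are placeholders.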
### Data Fields
Different datasets may have slightly varying fields, which include:
`file_name`: the file name \
`width` and `height`: the dimensions of the image \
`study`: which research study the image was collected as part of \
`location` : the name of the location at which the image was taken \
`annotations`: information about image annotation, which includes the taxonomy information, bounding box/boxes (`bbox`/`bboxes`) if any, as well as any other annotation information. \
`image` : the `path` to download the image and any other information that is available, e.g. its size in `bytes`.
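Putting those fields together, a single record might look like the (entirely made-up) dictionary below; the field names follow the list above, but the values and nesting are illustrative assumptions rather than output from any specific LILA configuration:

```python
# Illustrative record shaped like the fields described above (made-up values)
example = {
    "file_name": "loc_01/img_001.jpg",
    "width": 1920,
    "height": 1080,
    "study": "example_study",
    "location": "loc_01",
    "annotations": {
        "taxonomy": [{"genus": "felis", "species": "felis catus"}],
        "bbox": [[100.0, 200.0, 300.0, 250.0]],
    },
    "image": {"path": "https://example.org/loc_01/img_001.jpg", "bytes": None},
}

# Typical access patterns: image size and the first annotation's species
size = (example["width"], example["height"])
first_species = example["annotations"]["taxonomy"][0]["species"]
```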
### Data Splits
This dataset does not have a predefined train/test split.
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the [LILA maintainers](mailto:info@lila.science), since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Working with Taxonomies
All the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed; strings can likewise be converted back to integers in order to filter the dataset. In the example below we filter the "Caltech Camera Traps" dataset to find all entries whose first annotation has "felis catus" as the species.
```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
# Filters to show only cats
cats = dataset.filter(lambda x: x["annotations"]["taxonomy"][0]["species"] == taxonomy["species"].str2int("felis catus"))
```
The original common names have been saved with their taxonomy mappings in this repository in `common_names_to_tax.json`. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.
The following example loads the first "sea turtle" in the "Island Conservation Camera Traps" dataset.
```python
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Island Conservation Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
sea_turtle = LILA_COMMON_NAMES_TO_TAXONOMY.loc["sea turtle"].to_dict()
sea_turtle = {k: taxonomy[k].str2int(v) if v is not None else v for k, v in sea_turtle.items()} # Map to ClassLabel integers
sea_turtle_dataset = dataset.filter(lambda x: x["annotations"]["taxonomy"][0] == sea_turtle)
```
The example below selects a random item from the dataset, and then maps from the taxonomy to a common name:
```python
import numpy as np
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
random_entry = dataset.shuffle()[0]
filter_taxonomy = random_entry["annotations"]["taxonomy"][0]
filter_keys = [
    (k, taxonomy[k].int2str(v))
    for k, v in filter_taxonomy.items()
    if v is not None
]
if len(filter_keys) > 0:
print(LILA_COMMON_NAMES_TO_TAXONOMY[np.logical_and.reduce([
LILA_COMMON_NAMES_TO_TAXONOMY[k] == v for k,v in filter_keys
])])
else:
print("No common name found for the item.")
```
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/). Check the details of the specific dataset you are using in its section above.
### Citation Information
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to [@NimaBoscarino](https://github.com/NimaBoscarino/) for adding this dataset.
vocabtrimmer/mc4_validation | vocabtrimmer | 2023-03-02T13:33:54Z | 99 | 0 | null | ["region:us"] | 2023-03-02T13:33:54Z | 2023-03-02T09:20:16.000Z | 2023-03-02T09:20:16 |

# MC4: only validation split
This contains the validation split of [mc4](https://huggingface.co/datasets/mc4), extracted so that only the validation files need to be downloaded rather than the full mc4 data.
| [
-0.8132328391075134,
-0.16589286923408508,
0.07842617481946945,
0.47660407423973083,
-0.38631731271743774,
0.40921783447265625,
0.41631338000297546,
0.06234851852059364,
0.4196835160255432,
1.0063046216964722,
-1.1752996444702148,
-0.49957725405693054,
-0.25502657890319824,
0.3608762621879... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pushpdeep/fake_news_combined | pushpdeep | 2023-04-10T18:59:26Z | 99 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-04-10T18:59:26Z | 2023-03-09T06:04:04.000Z | 2023-03-09T06:04:04 | ---
license: apache-2.0
---
**Label Description**
0 : Fake,
1 : Real | [
-0.07611598819494247,
-0.8619408011436462,
0.13053208589553833,
0.855178952217102,
-0.3986198306083679,
0.23232440650463104,
0.6503798365592957,
-0.751811683177948,
1.0822944641113281,
0.8141992092132568,
-0.7786707282066345,
-0.08340713381767273,
-0.7597532272338867,
-0.16565322875976562,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
0-hero/OIG-small-chip2 | 0-hero | 2023-03-16T20:10:19Z | 99 | 6 | null | [
"task_categories:conversational",
"task_categories:text2text-generation",
"language:en",
"region:us"
] | 2023-03-16T20:10:19Z | 2023-03-16T17:59:26.000Z | 2023-03-16T17:59:26 | ---
dataset_info:
features:
- name: user
dtype: string
- name: chip2
dtype: string
splits:
- name: train
num_bytes: 82154419
num_examples: 210289
download_size: 51736759
dataset_size: 82154419
task_categories:
- conversational
- text2text-generation
language:
- en
---
# Dataset Card for "OIG-small-chip2"
OIG-small-chip2 dataset from https://laion.ai/blog/oig-dataset/ <br>
Original Dataset - https://github.com/LAION-AI/Open-Instruction-Generalist | [
-0.19557879865169525,
-0.3138026297092438,
0.2230536937713623,
-0.40285155177116394,
-0.30411288142204285,
-0.19748136401176453,
0.11600800603628159,
-0.028171392157673836,
0.5554925799369812,
0.601954996585846,
-0.6368187069892883,
-0.4560818672180176,
-0.20547474920749664,
-0.54731124639... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
suolyer/pile_uspto | suolyer | 2023-03-27T02:09:02Z | 99 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-03-27T02:09:02Z | 2023-03-26T16:40:08.000Z | 2023-03-26T16:40:08 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
skrishna/coin_flip | skrishna | 2023-05-12T12:58:10Z | 99 | 0 | null | [
"license:mit",
"region:us"
] | 2023-05-12T12:58:10Z | 2023-05-10T04:54:00.000Z | 2023-05-10T04:54:00 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CollectiveCognition/chats-data-2023-09-27 | CollectiveCognition | 2023-09-28T00:40:51Z | 99 | 16 | null | [
"license:mit",
"region:us"
] | 2023-09-28T00:40:51Z | 2023-09-28T00:39:17.000Z | 2023-09-28T00:39:17 | ---
license: mit
---
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and instilling knowledge.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, though chats in other languages may appear as users contribute more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag for whether the chat used custom instructions (the instructions themselves are currently not included), and the messages, each with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform with ShareGPT standard.
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
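A single entry in this ShareGPT-style format can be handled with the standard library alone; below is a minimal sketch that parses an abbreviated entry and walks its turns (the sample content is shortened, and the `turns` helper is ours, not part of the dataset):

```python
import json

# An abbreviated entry following the ShareGPT-style schema shown above.
entry_json = """
{
  "title": "Interest Formulas Clarification",
  "model_name": "GPT-4",
  "custom_instruction": false,
  "conversations": [
    {"from": "human", "value": "Is this correct? ..."},
    {"from": "assistant", "value": "Let me clarify: ..."}
  ]
}
"""

def turns(entry):
    """Yield (role, text) pairs for each turn in one chat entry."""
    for message in entry["conversations"]:
        yield message["from"], message["value"]

entry = json.loads(entry_json)
for role, text in turns(entry):
    print(f"{role}: {text}")
```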
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT; the data is likely skewed toward the purposes for which users most often turn to ChatGPT.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT | [
-0.3608556091785431,
-0.9406598806381226,
0.19270405173301697,
0.38227805495262146,
-0.03379739448428154,
0.12425978481769562,
-0.2086908221244812,
-0.25429487228393555,
0.27907347679138184,
0.48307517170906067,
-0.6713663339614868,
-0.679639458656311,
-0.7008354067802429,
-0.1428916305303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucas-meyer/asr_af | lucas-meyer | 2023-10-16T20:51:26Z | 99 | 0 | null | [
"region:us"
] | 2023-10-16T20:51:26Z | 2023-10-10T17:08:46.000Z | 2023-10-10T17:08:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1134983208.472
num_examples: 2723
- name: validation
num_bytes: 398459352.0
num_examples: 447
- name: test
num_bytes: 467308235.0
num_examples: 476
download_size: 2232381103
dataset_size: 2000750795.472
---
# Dataset Card for "asr_af"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6394525766372681,
-0.22186583280563354,
-0.03091612085700035,
0.3604971170425415,
-0.1732577383518219,
0.07119112461805344,
0.32815101742744446,
-0.25729241967201233,
0.8152730464935303,
0.4050059914588928,
-0.7718592286109924,
-0.6147649884223938,
-0.7240868210792542,
-0.08099608123302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davebulaval/CSMD | davebulaval | 2023-11-21T18:38:13Z | 99 | 0 | null | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"multilinguality:monolingual",
"multilinguality:aligned",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da",
"l... | 2023-11-21T18:38:13Z | 2023-11-13T15:38:03.000Z | 2023-11-13T15:38:03 | ---
license:
- cc-by-4.0
multilinguality:
- monolingual
- aligned
task_categories:
- text-classification
- text2text-generation
source_datasets:
- original
- >-
extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da
language:
- en
tags:
- simplification-evaluation
- meaning-evaluation
pretty_name: CSMD
size_categories:
- 1K<n<10K
dataset_info:
- config_name: meaning
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 251558
num_examples: 853
- name: dev
num_bytes: 27794
num_examples: 95
- name: test
num_bytes: 117686
num_examples: 407
download_size: 397038
dataset_size: 1355
- config_name: meaning_with_data_augmentation
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 1151604
num_examples: 2560
- name: dev
num_bytes: 120991
num_examples: 285
- name: test
num_bytes: 540844
num_examples: 1220
download_size: 1813439
dataset_size: 4065
- config_name: meaning_holdout_identical
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: test
num_bytes: 89866
num_examples: 359
download_size: 89866
dataset_size: 359
- config_name: meaning_holdout_unrelated
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: test
num_bytes: 247835
num_examples: 359
download_size: 247835
dataset_size: 359
config_names:
- meaning
- meaning_with_data_augmentation
- meaning_holdout_identical
- meaning_holdout_unrelated
viewer: true
configs:
- config_name: meaning
data_files:
- split: train
path: "train.tsv"
- split: dev
path: "dev.tsv"
- split: test
path: "test.tsv"
- config_name: meaning_with_data_augmentation
data_files:
- split: train
path: "train_da.tsv"
- split: dev
path: "dev_da.tsv"
- split: test
path: "test_da.tsv"
- config_name: meaning_holdout_identical
data_files:
- split: test
path: "identical.tsv"
- config_name: meaning_holdout_unrelated
data_files:
- split: test
path: "unrelated.tsv"
---
# Dataset Card for "Continuous Scale Meaning Dataset" (CSMD)
CSMD was created for [MeaningBERT: Assessing Meaning Preservation Between Sentences](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full).
It contains 1,355 English text simplification meaning preservation annotations. Meaning preservation measures how well the meaning of the output text corresponds to the meaning of the source ([Saggion, 2017](https://link.springer.com/book/10.1007/978-3-031-02166-4)).
The annotations were taken from the following four datasets:
- [ASSET](https://aclanthology.org/2020.acl-main.424/)
- [QuestEVal](https://arxiv.org/abs/2104.07560),
- [SimpDa_2022](https://aclanthology.org/2023.acl-long.905.pdf) and,
- [Simplicity-DA](https://direct.mit.edu/coli/article/47/4/861/106930/The-Un-Suitability-of-Automatic-Evaluation-Metrics).
It contains a data augmentation subset of 1,355 identical sentence triplets and 1,355 unrelated sentence triplets (See the "Sanity Checks" section (3.3.) in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
It also contains two holdout subsets of 359 identical sentence triplets and 359 unrelated sentence triplets (See the "MeaningBERT" section (3.4.) in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
## Dataset Structure
### Data Instances
- `meaning` configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).
- `meaning_with_data_augmentation` configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 1) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
- `meaning_holdout_identical` configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.
- `meaning_holdout_unrelated` configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.
### Data Fields
- `original`: an original sentence from the source datasets.
- `simplification`: a simplification of the original obtained by an automated system or a human.
- `label`: a meaning preservation rating between 0 and 100.
### Data Splits
The split statistics of CSMD are given below.
| | Train | Dev | Test | Total |
| ------ | ------ | ------ | ---- | ----- |
| Meaning | 853 | 95 | 407 | 1,355 |
| Meaning With Data Augmentation | 2,560 | 285 | 1,220 | 4,065 |
| Meaning Holdout Identical | NA | NA | 359 | 359 |
| Meaning Holdout Unrelated | NA | NA | 359 | 359 |
All splits were created with a random 60-10-30 partition using the seed `42`.
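As a rough illustration of such a split, a seeded 60-10-30 random partition can be sketched with the standard library (this is our reconstruction, not the authors' code; note the published counts above do not match an exact 60/10/30 partition of 1,355 items):

```python
import random

def split_60_10_30(items, seed=42):
    """Shuffle items with a fixed seed and cut them 60/10/30 into train/dev/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = n * 60 // 100   # integer arithmetic avoids float rounding surprises
    n_dev = n * 10 // 100
    return items[:n_train], items[n_train:n_train + n_dev], items[n_train + n_dev:]

train, dev, test = split_60_10_30(range(1355))
print(len(train), len(dev), len(test))  # 813 135 407
```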
# Citation Information
```
@ARTICLE{10.3389/frai.2023.1223924,
AUTHOR={Beauchemin, David and Saggion, Horacio and Khoury, Richard},
TITLE={{MeaningBERT: Assessing Meaning Preservation Between Sentences}},
JOURNAL={Frontiers in Artificial Intelligence},
VOLUME={6},
YEAR={2023},
URL={https://www.frontiersin.org/articles/10.3389/frai.2023.1223924},
DOI={10.3389/frai.2023.1223924},
ISSN={2624-8212},
}
``` | [
-0.2329549640417099,
-0.6545138359069824,
0.424311101436615,
0.12029629945755005,
-0.5322820544242859,
-0.21960903704166412,
-0.3754905164241791,
-0.17025750875473022,
0.36825767159461975,
0.3739098608493805,
-0.6268345713615417,
-0.6681399345397949,
-0.605327308177948,
0.4094330370426178,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shivkumarganesh/CoLA | shivkumarganesh | 2021-10-30T19:53:06Z | 98 | 1 | null | [
"region:us"
] | 2021-10-30T19:53:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915729 | autoevaluate | 2022-08-11T14:48:21Z | 98 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-11T14:48:21Z | 2022-08-11T13:19:00.000Z | 2022-08-11T13:19:00 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: google/pegasus-cnn_dailymail
metrics: ['bleu']
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-cnn_dailymail
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model. | [
-0.513755202293396,
-0.1860233098268509,
0.08932293951511383,
0.049464527517557144,
-0.24726508557796478,
-0.1409507691860199,
0.001565498998388648,
-0.3844585716724396,
0.3674141764640808,
0.3707839250564575,
-1.113452672958374,
-0.1975010186433792,
-0.7178342342376709,
-0.158036723732948... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joelniklaus/EU_Wikipedias | joelniklaus | 2023-03-21T15:44:18Z | 98 | 1 | null | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language... | 2023-03-21T15:44:18Z | 2022-11-20T01:31:51.000Z | 2022-11-20T01:31:51 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "EUWikipedias: A dataset of Wikipedias in the EU languages"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for EUWikipedias: A dataset of Wikipedias in the EU languages
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
Wikipedia dataset containing cleaned articles for each of the 24 EU languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
### Supported Tasks and Leaderboards
The dataset supports the tasks of fill-mask.
### Languages
The following languages are supported:
bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
It is structured in the following format: {date}/{language}_{shard}.jsonl.xz
At the moment only the date '20221120' is supported.
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset('joelito/EU_Wikipedias', date="20221120", language="de", split='train', streaming=True)
```
### Data Instances
The file format is jsonl.xz and there is one split available (`train`).
| Source | Size (MB) | Words | Documents | Words/Document |
|:-------------|------------:|-----------:|------------:|-----------------:|
| 20221120.all | 86034 | 9506846949 | 26481379 | 359 |
| 20221120.bg | 1261 | 88138772 | 285876 | 308 |
| 20221120.cs | 1904 | 189580185 | 513851 | 368 |
| 20221120.da | 679 | 74546410 | 286864 | 259 |
| 20221120.de | 11761 | 1191919523 | 2740891 | 434 |
| 20221120.el | 1531 | 103504078 | 215046 | 481 |
| 20221120.en | 26685 | 3192209334 | 6575634 | 485 |
| 20221120.es | 6636 | 801322400 | 1583597 | 506 |
| 20221120.et | 538 | 48618507 | 231609 | 209 |
| 20221120.fi | 1391 | 115779646 | 542134 | 213 |
| 20221120.fr | 9703 | 1140823165 | 2472002 | 461 |
| 20221120.ga | 72 | 8025297 | 57808 | 138 |
| 20221120.hr | 555 | 58853753 | 198746 | 296 |
| 20221120.hu | 1855 | 167732810 | 515777 | 325 |
| 20221120.it | 5999 | 687745355 | 1782242 | 385 |
| 20221120.lt | 409 | 37572513 | 203233 | 184 |
| 20221120.lv | 269 | 25091547 | 116740 | 214 |
| 20221120.mt | 29 | 2867779 | 5030 | 570 |
| 20221120.nl | 3208 | 355031186 | 2107071 | 168 |
| 20221120.pl | 3608 | 349900622 | 1543442 | 226 |
| 20221120.pt | 3315 | 389786026 | 1095808 | 355 |
| 20221120.ro | 1017 | 111455336 | 434935 | 256 |
| 20221120.sk | 506 | 49612232 | 238439 | 208 |
| 20221120.sl | 543 | 58858041 | 178472 | 329 |
| 20221120.sv | 2560 | 257872432 | 2556132 | 100 |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
This dataset has been created by downloading the wikipedias using [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) for the 24 EU languages.
For more information about the creation of the dataset please refer to prepare_wikipedias.py
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| [
-0.7257739901542664,
-0.4449039101600647,
0.18636870384216309,
0.1390242725610733,
-0.17482468485832214,
-0.17094485461711884,
-0.46108001470565796,
-0.3728812336921692,
0.727664053440094,
0.374350368976593,
-0.7472763657569885,
-0.7702208161354065,
-0.5002747178077698,
0.3103136122226715,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-zh-queries-22-12 | Cohere | 2023-02-06T11:55:33Z | 98 | 22 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:55:33Z | 2023-01-31T13:38:51.000Z | 2023-01-31T13:38:51 | ---
annotations_creators:
- expert-generated
language:
- zh
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
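The segmentation idea in the quoted description — splitting an article into passages on blank lines and keeping the article title with every passage — can be sketched as follows (a simplification of what WikiExtractor actually does):

```python
def segment_article(title, text):
    """Split article text into passages on blank lines, attaching the title to each."""
    passages = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [{"title": title, "text": p} for p in passages]

article = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
units = segment_article("Example Article", article)
for unit in units:
    print(unit["title"], "->", unit["text"])
```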
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity: compare the query embeddings against the document embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-zh-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
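hit@3 as defined here — does at least one relevant document appear in the top-3 results — takes only a few lines to compute; the sketch below uses hypothetical rankings and relevance judgments, not the evaluation data itself:

```python
def hit_at_k(ranked_ids, relevant_ids, k=3):
    """1.0 if any of the top-k ranked documents is relevant, else 0.0."""
    return 1.0 if any(doc_id in relevant_ids for doc_id in ranked_ids[:k]) else 0.0

def mean_hit_at_k(rankings, relevance, k=3):
    """Average hit@k over all queries, as a percentage."""
    scores = [hit_at_k(rankings[q], relevance[q], k) for q in rankings]
    return 100.0 * sum(scores) / len(scores)

# Hypothetical top-ranked document ids per query and relevance judgments.
rankings = {"q1": ["d3", "d7", "d1"], "q2": ["d9", "d2", "d5"]}
relevance = {"q1": {"d1"}, "q2": {"d8"}}
print(mean_hit_at_k(rankings, relevance))  # 50.0
```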
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| [
-0.6337466239929199,
-0.7860816121101379,
0.36231154203414917,
0.19228751957416534,
-0.09558746963739395,
-0.08747368305921555,
-0.31093522906303406,
-0.48202770948410034,
0.5483473539352417,
0.21191829442977905,
-0.605864942073822,
-1.0422202348709106,
-0.6617672443389893,
0.3027292490005... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sedthh/tv_dialogue | sedthh | 2023-03-16T13:44:59Z | 98 | 7 | null | [
"task_categories:conversational",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"OpenAssistant",
"transcripts",
"subtitles",
"television",
"region:us"
] | 2023-03-16T13:44:59Z | 2023-03-13T20:33:06.000Z | 2023-03-13T20:33:06 | ---
dataset_info:
features:
- name: TEXT
dtype: string
- name: METADATA
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 211728118
num_examples: 2781
download_size: 125187885
dataset_size: 211728118
license: mit
task_categories:
- conversational
- text2text-generation
- text-generation
language:
- en
tags:
- OpenAssistant
- transcripts
- subtitles
- television
pretty_name: TV and Movie dialogue and transcript corpus
size_categories:
- 1K<n<10K
---
# Dataset Card for "tv_dialogue"
This dataset contains transcripts for famous movies and TV shows from multiple sources.
An example dialogue would be:
```
[PERSON 1] Hello
[PERSON 2] Hello Person 2!
How's it going?
(they are both talking)
[PERSON 1] I like being an example
on Huggingface!
They are examples on Huggingface.
CUT TO ANOTHER SCENE
We are somewhere else
[PERSON 1 (v.o)] I wonder where we are?
```
All dialogues were processed to follow this format. Each row is a single episode / movie (**2781** rows total)
following the [OpenAssistant](https://open-assistant.io/) format. The METADATA column contains additional information as a JSON string.
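The `[SPEAKER]` convention above can be parsed back into dialogue turns with a few lines of Python — a minimal sketch (the regex and the handling of untagged stage-direction lines are assumptions, not part of the dataset tooling):

```python
import re

SPEAKER_TAG = re.compile(r"^\[([^\]]+)\]\s*(.*)$")

def parse_turns(transcript):
    """Split a transcript into (speaker, text) turns.

    Untagged lines (continued dialogue, stage directions, scene cuts)
    are appended to the preceding turn; leading untagged lines get
    speaker None."""
    turns = []
    for line in transcript.splitlines():
        match = SPEAKER_TAG.match(line)
        if match:
            turns.append((match.group(1), match.group(2)))
        elif line.strip():
            if turns:
                speaker, text = turns[-1]
                turns[-1] = (speaker, text + "\n" + line)
            else:
                turns.append((None, line))
    return turns
```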
## Dialogue only, with some information on the scene
| Show | Number of scripts | Via | Source |
|----|----|---|---|
| Friends | 236 episodes | https://github.com/emorynlp/character-mining | friends/emorynlp |
| The Office | 186 episodes | https://www.kaggle.com/datasets/nasirkhalid24/the-office-us-complete-dialoguetranscript | office/nasirkhalid24 |
| Marvel Cinematic Universe | 18 movies | https://www.kaggle.com/datasets/pdunton/marvel-cinematic-universe-dialogue | marvel/pdunton |
| Doctor Who | 306 episodes | https://www.kaggle.com/datasets/jeanmidev/doctor-who | drwho/jeanmidev |
| Star Trek | 708 episodes | http://www.chakoteya.net/StarTrek/index.html based on https://github.com/GJBroughton/Star_Trek_Scripts/ | statrek/chakoteya |
## Actual transcripts with detailed information on the scenes
| Show | Number of scripts | Via | Source |
|----|----|---|---|
| Top Movies | 919 movies | https://imsdb.com/ | imsdb |
| Top Movies | 171 movies | https://www.dailyscript.com/ | dailyscript |
| Stargate SG-1 | 18 episodes | https://imsdb.com/ | imsdb |
| South Park | 129 episodes | https://imsdb.com/ | imsdb |
| Knight Rider | 80 episodes | http://www.knightriderarchives.com/ | knightriderarchives | | [
-0.34119871258735657,
-0.32503584027290344,
0.409292995929718,
-0.03177507594227791,
-0.448593407869339,
0.04016835615038872,
0.06285105645656586,
0.40689823031425476,
0.635241687297821,
0.6390470266342163,
-0.9063788652420044,
-0.8080466389656067,
-0.5873968601226807,
0.26800429821014404,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
the-coorporation/the_squad_qg | the-coorporation | 2023-04-23T16:59:58Z | 98 | 0 | null | [
"language:en",
"license:wtfpl",
"region:us"
] | 2023-04-23T16:59:58Z | 2023-04-09T11:16:54.000Z | 2023-04-09T11:16:54 | ---
license: wtfpl
dataset_info:
- config_name: v2
features:
- name: context
dtype: string
- name: questions
dtype: string
splits:
- name: train
num_bytes: 20328952
num_examples: 18877
- name: validation
num_bytes: 1419411
num_examples: 1204
download_size: 24163282
dataset_size: 21748363
- config_name: v1
features:
- name: context
dtype: string
- name: questions
dtype: string
splits:
- name: train
num_bytes: 20391081
num_examples: 18891
- name: validation
num_bytes: 2389185
num_examples: 2067
download_size: 25308169
dataset_size: 22780266
language:
- en
pretty_name: The SQuAD QG Dataset
---
# The SQuAD QG Dataset
## Description
[Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
This modified version is aimed at question generation;
each entry contains a context together with all of its related questions concatenated into a single string.
`The SQuAD QG` dataset unites SQuAD 1.1 and 2.0 in two subsets (`v1` and `v2`), each containing a `train` and `validation` split.
## Dataset Structure
### Data Instances
An example entry looks as follows:
```python
{
context: "This is a test context",
questions: ["Is this a test?", "Is this a test context?"]
}
```
### Data Fields
The dataset has the following fields:
* context: a string feature
* questions: a string feature
**NB:** The data fields are the same among all splits.
### Data Splits
| name | train | validation |
|------|-------|------------|
| v1 | 18891 | 2067 |
| v2 | 18877 | 1204 |
| [
-0.6018273234367371,
-0.862329363822937,
0.27908042073249817,
0.17315807938575745,
-0.14433011412620544,
0.17039504647254944,
0.15312106907367706,
-0.1659511923789978,
-0.007423562463372946,
0.3700215518474579,
-1.4206644296646118,
-0.4872923195362091,
-0.10382428765296936,
0.4209472835063... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CyberHarem/yuudachi_kantaicollection | CyberHarem | 2023-09-17T17:16:35Z | 98 | 0 | null | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-09-17T17:16:35Z | 2023-08-19T10:46:29.000Z | 2023-08-19T10:46:29 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yuudachi_kantaicollection
This is the dataset of yuudachi_kantaicollection, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 486 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 486 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 486 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 486 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
| [
-0.8261091113090515,
-0.20695610344409943,
0.315839946269989,
0.15786424279212952,
-0.2912816107273102,
-0.12323775887489319,
0.176622211933136,
-0.41695165634155273,
0.7730652093887329,
0.4700537323951721,
-0.9209973216056824,
-0.8033817410469055,
-0.5520067811012268,
0.20267756283283234,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ZenMoore/RoleBench | ZenMoore | 2023-11-23T11:09:22Z | 98 | 27 | null | [
"language:zh",
"language:en",
"license:apache-2.0",
"Role-Playing",
"Instruction",
"arxiv:2310.00746",
"region:us"
] | 2023-11-23T11:09:22Z | 2023-10-19T08:54:01.000Z | 2023-10-19T08:54:01 | ---
language:
- zh
- en
pretty_name: "RoleBench"
tags:
- Role-Playing
- Instruction
license: "apache-2.0"
---
# RoleBench
- Paper Title: RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models
- arXiv Link: https://arxiv.org/abs/2310.00746
- Github Repo: https://github.com/InteractiveNLP-Team/RoleLLM-public
Please read our paper for more details about this dataset.
TL;DR: We introduce RoleLLM, a role-playing framework of data construction and evaluation (RoleBench), as well as solutions for both closed-source and open-source models (RoleGPT, RoleLLaMA, RoleGLM). We also propose Context-Instruct for long-text knowledge extraction and role-specific knowledge injection.
---
# List of Roles

Abraham Lincoln, Alvy Singer, Andrew Detmer, Angel, Antonio Salieri, Bai Li (李白,Chinese), Benjamin Button, Blair Waldorf, Bruno Antony, Caden Cotard, Caesar, Coach Eric Taylor, Colonel Hans Landa, Colonel Nathan R. Jessep, Coriolanus, D_Artagnan, David Aames, Doctor Who, Dr. Frank N Furter, Dr. Hannibal Lecter, Emperor (《甄嬛传》皇帝,Chinese), Fei Zhang (张飞,Chinese), Fletcher Reede, Frank T.J. Mackey, Fred Flintstone, Freddy Krueger, Gaston, Gregory House, HAL 9000, Harvey Milk, Imperial Concubine Hua (《甄嬛传》华妃,Chinese), Jack, Jack Sparrow, Jack Torrance, Jackie Moon, James Bond, James Brown, James Carter, Jeff Spicoli, Jigsaw, Jim Morrison, John Coffey, John Dillinger, John Doe, John Keating, Jordan Belfort, Judge Dredd, Judy Hoops, Juno MacGuff, Karl Childers, Klaus Mikaelson, Leonard Shelby, Leroy Jethro Gibbs, Lestat de Lioncourt, Logan, Lucifer Morningstar, Lyn Cassady, Malcolm X, Mark Renton, Mary Sibley, Mater, Michael Scott, Murphy MacManus, Oliver Queen, Pat Solitano, Paul Conroy, Paul Vitti, Peter Parker, Po, Professor G.H. Dorr, Queen Catherine, Queen Elizabeth I, Rachel Lang, Randle McMurphy, Raylan Givens, Robert Angier, Rorschach, Seth, Sheldon Cooper, Sherlock Holmes, Shrek, Sonny, Stanley Ipkiss, Stephen Hawking, Stifler, The Dude, Theodore Twombly, Thor, Tom Ripley, Travis Bickle, Truman Capote, Tugg Speedman, Twilight Sparkle, Tyler Hawkins, Tyrion Lannister, Violet Weston, Wade Wilson, Walt Kowalski, Willie Soke, Wukong Sun (《西游记》孙悟空,Chinese).
---
# Non-Cherry-Picked Demonstrations




---
# Statistics


---
# Download
```bash
git lfs install
git clone https://huggingface.co/datasets/ZenMoore/RoleBench
```
```python
from datasets import load_dataset
dataset = load_dataset("ZenMoore/RoleBench")
```
---
# File Structure
- `instructions-eng`: Contains English Instructions (both general and role-specific ones). `nums.jsonl` indicates the number of role-specific instructions for each role, while `split_info.txt` records how many segments each role's script can be divided into during the Context-Instruct.
- `instructions-zh`: Similarly for Chinese.
- `profiles-eng`: Contains the description file `desc.json` for all roles, dialogue data files `profiles-eng-{role_name}.jsonl` for each role, and the script names in `scripts.json`.
- `profiles-zh`: Similarly for Chinese.
- `rolebench-eng/instruction-generalization`, `rolebench-eng/role-generalization`, and `rolebench-zh`: All contain two subfolders: `general` and `role_specific`. Each subfolder has training data, testing data, and the RoleGPT baseline results for comparison.
---
# License
Apache 2.0 License.
---
# Citation
Feel free to cite us if you like RoleBench and RoleLLM.
```bibtex
@article{wang2023rolellm,
title = {RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models},
author = {Zekun Moore Wang and Zhongyuan Peng and Haoran Que and Jiaheng Liu and Wangchunshu Zhou and Yuhan Wu and Hongcheng Guo and Ruitong Gan and Zehao Ni and Man Zhang and Zhaoxiang Zhang and Wanli Ouyang and Ke Xu and Wenhu Chen and Jie Fu and Junran Peng},
year = {2023},
journal = {arXiv preprint arXiv: 2310.00746}
}
```
```bibtex
@article{wang2023interactive,
title={Interactive Natural Language Processing},
author={Wang, Zekun and Zhang, Ge and Yang, Kexin and Shi, Ning and Zhou, Wangchunshu and Hao, Shaochun and Xiong, Guangzheng and Li, Yizhi and Sim, Mong Yuan and Chen, Xiuying and others},
journal={arXiv preprint arXiv:2305.13246},
year={2023}
}
``` | [
-0.48918822407722473,
-0.5275868773460388,
0.18675607442855835,
0.22847889363765717,
-0.04566371813416481,
-0.015585380606353283,
-0.3611946105957031,
-0.4559127688407898,
0.0560188964009285,
0.3925388753414154,
-0.479046493768692,
-0.7359186410903931,
-0.30544528365135193,
-0.084709331393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minnnnn/test | minnnnn | 2023-11-03T05:45:07Z | 98 | 0 | null | [
"region:us"
] | 2023-11-03T05:45:07Z | 2023-11-02T06:09:43.000Z | 2023-11-02T06:09:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erbacher/PDEBench-1D-full | erbacher | 2023-11-09T21:03:53Z | 98 | 0 | null | [
"region:us"
] | 2023-11-09T21:03:53Z | 2023-11-09T17:11:51.000Z | 2023-11-09T17:11:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
- name: id
dtype: int64
- name: cls
dtype:
class_label:
names:
'0': 1D_Advection_Sols_beta0.1.hdf5
'1': 1D_Advection_Sols_beta0.2.hdf5
'2': 1D_Advection_Sols_beta0.4.hdf5
'3': 1D_Advection_Sols_beta0.7.hdf5
'4': 1D_Advection_Sols_beta1.0.hdf5
'5': 1D_Advection_Sols_beta2.0.hdf5
'6': 1D_Advection_Sols_beta4.0.hdf5
'7': 1D_Advection_Sols_beta7.0.hdf5
'8': 1D_Burgers_Sols_Nu0.001.hdf5
'9': 1D_Burgers_Sols_Nu0.002.hdf5
'10': 1D_Burgers_Sols_Nu0.004.hdf5
'11': 1D_Burgers_Sols_Nu0.01.hdf5
'12': 1D_Burgers_Sols_Nu0.02.hdf5
'13': 1D_Burgers_Sols_Nu0.04.hdf5
'14': 1D_Burgers_Sols_Nu0.1.hdf5
'15': 1D_Burgers_Sols_Nu0.2.hdf5
'16': 1D_Burgers_Sols_Nu0.4.hdf5
'17': 1D_Burgers_Sols_Nu1.0.hdf5
'18': 1D_Burgers_Sols_Nu2.0.hdf5
'19': 1D_Burgers_Sols_Nu4.0.hdf5
'20': 1D_CFD_Rand_Eta0.01_Zeta0.01_periodic_Train.hdf5-density
'21': 1D_CFD_Rand_Eta0.01_Zeta0.01_periodic_Train.hdf5-pressure
'22': 1D_CFD_Rand_Eta0.01_Zeta0.01_periodic_Train.hdf5-vx
'23': 1D_CFD_Rand_Eta0.1_Zeta0.1_periodic_Train.hdf5-density
'24': 1D_CFD_Rand_Eta0.1_Zeta0.1_periodic_Train.hdf5-pressure
'25': 1D_CFD_Rand_Eta0.1_Zeta0.1_periodic_Train.hdf5-vx
'26': 1D_CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic_Train.hdf5-density
'27': 1D_CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic_Train.hdf5-pressure
'28': 1D_CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic_Train.hdf5-vx
'29': 1D_CFD_Rand_Eta1.e-8_Zeta1.e-8_trans_Train.hdf5-density
'30': 1D_CFD_Rand_Eta1.e-8_Zeta1.e-8_trans_Train.hdf5-pressure
'31': 1D_CFD_Rand_Eta1.e-8_Zeta1.e-8_trans_Train.hdf5-vx
'32': ReacDiff_Nu0.5_Rho1.0.hdf5
'33': ReacDiff_Nu0.5_Rho10.0.hdf5
'34': ReacDiff_Nu0.5_Rho2.0.hdf5
'35': ReacDiff_Nu0.5_Rho5.0.hdf5
'36': ReacDiff_Nu1.0_Rho1.0.hdf5
'37': ReacDiff_Nu1.0_Rho10.0.hdf5
'38': ReacDiff_Nu1.0_Rho2.0.hdf5
'39': ReacDiff_Nu1.0_Rho5.0.hdf5
'40': ReacDiff_Nu2.0_Rho1.0.hdf5
'41': ReacDiff_Nu2.0_Rho10.0.hdf5
'42': ReacDiff_Nu2.0_Rho2.0.hdf5
'43': ReacDiff_Nu2.0_Rho5.0.hdf5
'44': ReacDiff_Nu5.0_Rho1.0.hdf5
'45': ReacDiff_Nu5.0_Rho10.0.hdf5
'46': ReacDiff_Nu5.0_Rho2.0.hdf5
'47': ReacDiff_Nu5.0_Rho5.0.hdf5
splits:
- name: train
num_bytes: 99806420000
num_examples: 480000
download_size: 2640711523
dataset_size: 99806420000
---
# Dataset Card for "PDEBench-1D-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8652849197387695,
-0.39321577548980713,
0.25525549054145813,
0.7069247961044312,
-0.501895546913147,
-0.14868563413619995,
0.25775638222694397,
-0.0011815320467576385,
1.0005879402160645,
0.659844696521759,
-1.0385286808013916,
-1.0808812379837036,
-0.49312591552734375,
-0.1694163978099... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bh8648/esg1to3 | bh8648 | 2023-11-14T14:01:13Z | 98 | 0 | null | [
"region:us"
] | 2023-11-14T14:01:13Z | 2023-11-14T14:01:10.000Z | 2023-11-14T14:01:10 | ---
dataset_info:
features:
- name: Major Category
dtype: string
- name: Middle Category
dtype: string
- name: Small Category
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 690585
num_examples: 170
download_size: 339311
dataset_size: 690585
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "esg1to3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7196668982505798,
-0.36343637108802795,
0.4117600917816162,
0.24611857533454895,
-0.24931256473064423,
-0.16202162206172943,
0.33997440338134766,
-0.2391110360622406,
0.8879675269126892,
0.6013681888580322,
-0.9750511646270752,
-0.944072961807251,
-0.7041880488395691,
-0.122805215418338... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jherng/xd-violence | jherng | 2023-11-20T16:41:58Z | 98 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-20T16:41:58Z | 2023-11-19T10:52:59.000Z | 2023-11-19T10:52:59 | ---
license: mit
dataset_info:
config_name: video
features:
- name: id
dtype: string
- name: path
dtype: string
- name: binary_target
dtype:
class_label:
names:
'0': Non-violence
'1': Violence
- name: multilabel_targets
sequence:
class_label:
names:
'0': Normal
'1': Fighting
'2': Shooting
'3': Riot
'4': Abuse
'5': Car accident
'6': Explosion
- name: frame_annotations
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 780973
num_examples: 3942
- name: test
num_bytes: 118898
num_examples: 500
download_size: 79370542052
dataset_size: 899871
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pipper/sol_processed_data | Pipper | 2023-11-19T14:25:34Z | 98 | 0 | null | [
"region:us"
] | 2023-11-19T14:25:34Z | 2023-11-19T11:52:58.000Z | 2023-11-19T11:52:58 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 12740019180
num_examples: 3814377
download_size: 1991408875
dataset_size: 12740019180
---
# Dataset Card for "sol_processed_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3815871477127075,
-0.22640809416770935,
0.43827876448631287,
0.3754890561103821,
-0.08680693060159683,
-0.13164478540420532,
0.13123899698257446,
-0.1371801495552063,
0.6872445344924927,
0.7251284718513489,
-1.0462185144424438,
-0.8523760437965393,
-0.44025883078575134,
-0.1101282760500... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mozilla-foundation/common_voice_7_0 | mozilla-foundation | 2023-07-29T16:00:09Z | 97 | 23 | common-voice | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2023-07-29T16:00:09Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 1K<n<10K
ar:
- 100K<n<1M
as:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 100K<n<1M
bg:
- 1K<n<10K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 100K<n<1M
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 100K<n<1M
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fi:
- 1K<n<10K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
gl:
- 1K<n<10K
gn:
- 1K<n<10K
ha:
- 1K<n<10K
hi:
- 1K<n<10K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 10K<n<100K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ky:
- 10K<n<100K
lg:
- 10K<n<100K
lt:
- 10K<n<100K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sk:
- 10K<n<100K
sl:
- 1K<n<10K
sr:
- n<1K
sv-SE:
- 10K<n<100K
ta:
- 100K<n<1M
th:
- 100K<n<1M
tr:
- 10K<n<100K
tt:
- 10K<n<100K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 1K<n<10K
uz:
- n<1K
vi:
- 10K<n<100K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 7.0
language_bcp47:
- ab
- ar
- as
- az
- ba
- bas
- be
- bg
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- it
- ja
- ka
- kab
- kk
- kmr
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sk
- sl
- sr
- sv-SE
- ta
- th
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 7.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 13905 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 11192 validated hours in 76 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Breton, Bulgarian, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, and train sets all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_7_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| [
-0.5343225598335266,
-0.7203505039215088,
0.1390673667192459,
0.44312307238578796,
-0.2845195531845093,
0.05087093263864517,
-0.5720617175102234,
-0.21642833948135376,
0.43637123703956604,
0.5597935318946838,
-0.7632845640182495,
-0.9892130494117737,
-0.45113489031791687,
0.248700708150863... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/x-stance | strombergnlp | 2022-10-25T21:45:25Z | 97 | 1 | null | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:de",
"language:fr",
"license:mit",
"stance-detection",
"arxiv:2003.08385",
"region:us"
] | 2022-10-25T21:45:25Z | 2022-05-18T09:55:43.000Z | 2022-05-18T09:55:43 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- de
- fr
license:
- mit
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: X-Stance
tags:
- stance-detection
---
# Dataset Card for X-Stance
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/ZurichNLP/xstance](https://github.com/ZurichNLP/xstance)
- **Paper:** [http://ceur-ws.org/Vol-2624/paper9.pdf](http://ceur-ws.org/Vol-2624/paper9.pdf), [https://arxiv.org/abs/2003.08385](https://arxiv.org/abs/2003.08385)
- **Point of Contact:** [Jannis Vamvas](https://twitter.com/j_vamvas)
### Dataset Summary
The x-stance dataset contains more than 150 political questions and 67k comments written by candidates on those questions. The comments are partly German, partly French and Italian. The data have been extracted from the Swiss voting advice platform Smartvote.
### Languages
German, French, Italian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'id': '0',
'question': 'Eine Volksinitiative fordert, dass die Gesamtfläche der Bauzonen in der Schweiz für die nächsten 20 Jahre auf dem heutigen Stand begrenzt wird. Befürworten Sie dieses Anliegen?',
  'comment': 'Eine fixe Grösse verbieten, ist das falsche Mittel',
  'label': 0
}
```
### Data Fields
- `id`: a 'string' feature.
- `question`: a 'string' expressing a claim/topic.
- `comment`: a 'string' to be classified for its stance toward the question.
- `label`:
```
0: "AGAINST",
1: "FAVOR"
```
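For illustration, the label convention above can be applied to the data instance shown earlier. This is a minimal sketch; `id2label` and `label2id` are helper names introduced here for the example, not part of the dataset itself:

```python
# Minimal sketch of the x-stance label convention described above.
# id2label / label2id are illustrative helper names, not dataset fields.
id2label = {0: "AGAINST", 1: "FAVOR"}
label2id = {name: idx for idx, name in id2label.items()}

# The 'train' instance shown in "Data Instances" (question abbreviated).
example = {
    "id": "0",
    "question": "Eine Volksinitiative fordert, dass die Gesamtfläche der Bauzonen ...",
    "comment": "Eine fixe Grösse verbieten, ist das falsche Mittel",
    "label": 0,
}

print(id2label[example["label"]])  # → AGAINST
```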
### Data Splits
|languages|name|instances|
|---------|----|----:|
|de|train|33850|
|de|validation|2871|
|de|test|11891|
|fr|train|11790|
|fr|validation|1055|
|fr|test|5814|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[MIT License](https://github.com/ZurichNLP/xstance/blob/master/LICENSE)
### Citation Information
```
@article{vamvas2020x,
title={X-stance: A multilingual multi-target dataset for stance detection},
author={Vamvas, Jannis and Sennrich, Rico},
journal={arXiv preprint arXiv:2003.08385},
year={2020}
}
```
### Contributions
Thanks to [mkonxd](https://github.com/mkonxd) and [leondz](https://github.com/leondz) for adding this dataset.
| [
-0.6682376265525818,
-0.25657206773757935,
0.3718867599964142,
0.029308229684829712,
-0.38536566495895386,
0.056730005890131,
-0.3621066212654114,
-0.17960405349731445,
0.7909737825393677,
0.4419556260108948,
-0.9583937525749207,
-1.2854630947113037,
-0.6587101817131042,
-0.029967539012432... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-ca1f103f-12035606 | autoevaluate | 2022-07-28T20:34:23Z | 97 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T20:34:23Z | 2022-07-28T19:57:31.000Z | 2022-07-28T19:57:31 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: nbroad/longt5-base-global-mediasum
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | [
-0.5462170243263245,
-0.22532004117965698,
0.09130751341581345,
0.3168073892593384,
-0.2056165188550949,
-0.080366350710392,
-0.054284557700157166,
-0.43536055088043213,
0.30223920941352844,
0.3810060918331146,
-1.0066478252410889,
-0.24224010109901428,
-0.7507770657539368,
-0.004946003668... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HighCWu/diffusiondb_2m_first_5k_canny | HighCWu | 2023-02-16T14:53:35Z | 97 | 4 | null | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] | 2023-02-16T14:53:35Z | 2023-02-16T14:16:14.000Z | 2023-02-16T14:16:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: guide
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3204091410
num_examples: 5000
download_size: 3203076374
dataset_size: 3204091410
license: openrail
task_categories:
- text-to-image
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "diffusiondb_2m_first_5k_canny"
The first 5k images of [DiffusionDB 2M](https://huggingface.co/datasets/poloclub/diffusiondb), processed into edge maps with the Canny algorithm.
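To illustrate the idea behind the edge extraction, here is a minimal pure-Python sketch of the gradient step that Canny-style edge detection builds on. It is NOT the full Canny pipeline (no Gaussian smoothing, non-maximum suppression, or hysteresis thresholding), and the dataset itself was presumably produced with a real implementation such as OpenCV's `cv2.Canny`; this sketch only marks pixels whose Sobel gradient magnitude exceeds a threshold:

```python
def sobel_edges(image, threshold=4):
    """Return a binary edge map for a 2D grayscale image (list of lists).

    A simplified stand-in for Canny: thresholded Sobel gradient magnitude.
    """
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel kernels approximate horizontal / vertical gradients.
            gx = (image[y-1][x+1] + 2*image[y][x+1] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y][x-1] - image[y+1][x-1])
            gy = (image[y+1][x-1] + 2*image[y+1][x] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y-1][x] - image[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges

# A 5x5 image with a vertical brightness step between columns 1 and 2:
# interior pixels adjacent to the step are marked as edges.
img = [[0, 0, 9, 9, 9]] * 5
print(sobel_edges(img))
```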
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| [
-0.5343776941299438,
-0.6555946469306946,
0.5417959690093994,
0.565165102481842,
-0.43303200602531433,
-0.17312441766262054,
0.3596858084201813,
-0.21125595271587372,
0.8446809649467468,
0.568814218044281,
-1.1919715404510498,
-0.9475821852684021,
-0.8713507056236267,
-0.3902601897716522,
... | null | null | null | null | null | null | null | null | null | null | null | null | null |