id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
facebook/voxpopuli | 2022-10-14T13:43:12.000Z | [
"task_categories:automatic-speech-recognition",
"multilinguality:multilingual",
"language:en",
"language:de",
"language:fr",
"language:es",
"language:pl",
"language:it",
"language:ro",
"language:hu",
"language:cs",
"language:nl",
"language:fi",
"language:hr",
"language:sk",
"language:sl",
"language:et",
"language:lt",
"license:cc0-1.0",
"license:other",
"arxiv:2101.00390",
"region:us"
] | facebook | A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. | @inproceedings{wang-etal-2021-voxpopuli,
title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning,
Semi-Supervised Learning and Interpretation",
author = "Wang, Changhan and
Riviere, Morgane and
Lee, Ann and
Wu, Anne and
Talnikar, Chaitanya and
Haziza, Daniel and
Williamson, Mary and
Pino, Juan and
Dupoux, Emmanuel",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics
and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.80",
doi = "10.18653/v1/2021.acl-long.80",
pages = "993--1003",
} | null | 23 | 3,166 | ---
annotations_creators: []
language:
- en
- de
- fr
- es
- pl
- it
- ro
- hu
- cs
- nl
- fi
- hr
- sk
- sl
- et
- lt
language_creators: []
license:
- cc0-1.0
- other
multilinguality:
- multilingual
pretty_name: VoxPopuli
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for VoxPopuli
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/facebookresearch/voxpopuli
- **Repository:** https://github.com/facebookresearch/voxpopuli
- **Paper:** https://arxiv.org/abs/2101.00390
- **Point of Contact:** [changhan@fb.com](mailto:changhan@fb.com), [mriviere@fb.com](mailto:mriviere@fb.com), [annl@fb.com](mailto:annl@fb.com)
### Dataset Summary
VoxPopuli is a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). We acknowledge the European Parliament for creating and sharing these materials.
This implementation contains transcribed speech data for 16 languages.
It also contains 29 hours of transcribed non-native English speech (15 L2 accents) intended for research on ASR for accented speech.
### Example usage
VoxPopuli contains labelled data for 16 languages. To load a specific language, pass its code as the config name:
```python
from datasets import load_dataset
voxpopuli_croatian = load_dataset("facebook/voxpopuli", "hr")
```
To load all the languages in a single dataset, use the "multilang" config name:
```python
voxpopuli_all = load_dataset("facebook/voxpopuli", "multilang")
```
To load a specific set of languages, use the "multilang" config name and pass a list of the required languages to the `languages` parameter:
```python
voxpopuli_slavic = load_dataset("facebook/voxpopuli", "multilang", languages=["hr", "sk", "sl", "cs", "pl"])
```
To load the accented English data, use the "en_accented" config name:
```python
voxpopuli_accented = load_dataset("facebook/voxpopuli", "en_accented")
```
**Note that the L2 English subset contains only a `test` split.**
### Supported Tasks and Leaderboards
* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
The accented English subset can also be used for research on ASR for accented speech (15 L2 accents).
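For illustration, WER can be computed as a word-level edit distance divided by the reference length. The sketch below is a minimal, self-contained reimplementation; in practice, libraries such as `jiwer` or the `evaluate` package are typically used instead:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, one substituted word in a three-word reference yields a WER of 1/3.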
### Languages
VoxPopuli contains labelled (transcribed) data for 16 languages:
| Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
|:---:|:---:|:---:|:---:|:---:|
| English | En | 543 | 1313 | 4.8M |
| German | De | 282 | 531 | 2.3M |
| French | Fr | 211 | 534 | 2.1M |
| Spanish | Es | 166 | 305 | 1.6M |
| Polish | Pl | 111 | 282 | 802K |
| Italian | It | 91 | 306 | 757K |
| Romanian | Ro | 89 | 164 | 739K |
| Hungarian | Hu | 63 | 143 | 431K |
| Czech | Cs | 62 | 138 | 461K |
| Dutch | Nl | 53 | 221 | 488K |
| Finnish | Fi | 27 | 84 | 160K |
| Croatian | Hr | 43 | 83 | 337K |
| Slovak | Sk | 35 | 96 | 270K |
| Slovene | Sl | 10 | 45 | 76K |
| Estonian | Et | 3 | 29 | 18K |
| Lithuanian | Lt | 2 | 21 | 10K |
| Total | | 1791 | 4295 | 15M |
The transcribed accented speech data covers 15 L2 accents:
| Accent | Code | Transcribed Hours | Transcribed Speakers |
|:---:|:---:|:---:|:---:|
| Dutch | en_nl | 3.52 | 45 |
| German | en_de | 3.52 | 84 |
| Czech | en_cs | 3.30 | 26 |
| Polish | en_pl | 3.23 | 33 |
| French | en_fr | 2.56 | 27 |
| Hungarian | en_hu | 2.33 | 23 |
| Finnish | en_fi | 2.18 | 20 |
| Romanian | en_ro | 1.85 | 27 |
| Slovak | en_sk | 1.46 | 17 |
| Spanish | en_es | 1.42 | 18 |
| Italian | en_it | 1.11 | 15 |
| Estonian | en_et | 1.08 | 6 |
| Lithuanian | en_lt | 0.65 | 7 |
| Croatian | en_hr | 0.42 | 9 |
| Slovene | en_sl | 0.25 | 7 |
## Dataset Structure
### Data Instances
```python
{
'audio_id': '20180206-0900-PLENARY-15-hr_20180206-16:10:06_5',
'language': 11, # "hr"
'audio': {
'path': '/home/polina/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/20180206-0900-PLENARY-15-hr_20180206-16:10:06_5.wav',
'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
'sampling_rate': 16000
},
'raw_text': '',
  'normalized_text': 'pošast genitalnog sakaćenja žena u europi tek je jedna od manifestacija takve štetne politike.',
'gender': 'female',
'speaker_id': '119431',
'is_gold_transcript': True,
'accent': 'None'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `language` (datasets.ClassLabel) - numerical id of the language of the audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `raw_text` (string) - original (orthographic) audio segment text
* `normalized_text` (string) - normalized audio segment transcription
* `gender` (string) - gender of speaker
* `speaker_id` (string) - id of speaker
* `is_gold_transcript` (bool) - ?
* `accent` (string) - type of accent, for example "en_lt", if applicable, else "None".
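For illustration, a segment's duration can be recovered from the decoded array and the sampling rate. The sketch below uses a hypothetical synthetic instance whose field layout mirrors the example above; the values do not come from VoxPopuli:

```python
# Hypothetical instance mirroring the structure of the data instance shown above
instance = {
    "audio": {
        "array": [0.0] * 48000,  # decoded samples (here: 3 seconds of silence)
        "sampling_rate": 16000,
    },
}

def duration_seconds(example: dict) -> float:
    """Segment duration = number of samples / samples per second."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

print(duration_seconds(instance))  # 3.0
```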
### Data Splits
All configs (languages) except for accented English contain data in three splits: train, validation, and test. The accented English `en_accented` config contains only a test split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home)
#### Initial Data Collection and Normalization
The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions. Official timestamps
are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture
of fragments from the preceding or the succeeding speeches. To calibrate the original timestamps,
we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) for segmentation instead.
Full-session audio is segmented into speech paragraphs by speaker, each of which has a transcript available.
The speech paragraphs have an average duration of 197 seconds, which is too long for most speech models to process. We hence further segment these paragraphs into utterances with a
maximum duration of 20 seconds. We leverage automatic speech recognition (ASR) systems to force-align the speech paragraphs to the given transcripts.
The ASR systems are TDS models (Hannun et al., 2019) trained with ASG criterion (Collobert et al., 2016) on audio tracks from in-house deidentified video data.
The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment.
We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER).
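The filtering step above can be sketched as follows. This is an illustrative reimplementation of the convention, not the authors' code: a candidate segment is kept only if the character error rate between its aligned transcript and the ASR reference prediction is at most 20%:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    # Rolling-row Levenshtein distance over characters
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        curr = [i]
        for j, h in enumerate(hypothesis, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1] / max(len(reference), 1)

def keep_segment(transcript: str, asr_prediction: str, max_cer: float = 0.20) -> bool:
    """Keep a candidate segment only if its CER against the ASR reference is <= 20%."""
    return cer(transcript, asr_prediction) <= max_cer
```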
#### Who are the source language producers?
Speakers are participants in European Parliament events; many of them are EU officials.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The speaker gender distribution is imbalanced: the percentage of female speakers is below 50% for most languages, with a minimum of 15% for the Lithuanian data.
VoxPopuli includes all available speeches from the 2009-2020 EP events without any selection of topics or speakers.
The speech contents represent the standpoints of the speakers in the EP events, many of which are EU officials.
### Other Known Limitations
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the CC0 license; see also the [European Parliament's legal notice](https://www.europarl.europa.eu/legal-notice/en/) for the raw data.
### Citation Information
Please cite this paper:
```bibtex
@inproceedings{wang-etal-2021-voxpopuli,
title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation",
author = "Wang, Changhan and
Riviere, Morgane and
Lee, Ann and
Wu, Anne and
Talnikar, Chaitanya and
Haziza, Daniel and
Williamson, Mary and
Pino, Juan and
Dupoux, Emmanuel",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.80",
pages = "993--1003",
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
Chris1/cityscapes | 2022-11-03T19:06:29.000Z | [
"region:us"
] | Chris1 | null | null | null | 1 | 3,165 | Entry not found |
reuters21578 | 2023-08-30T17:35:01.000Z | [
"language:en",
"license:other",
"region:us"
] | null | The Reuters-21578 dataset is one of the most widely used data collections for text
categorization research. It was collected from the Reuters financial newswire service in 1987. | @article{APTE94,
author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Automated Learning of Decision Rules for Text Categorization},
journal = {ACM Transactions on Information Systems},
year = {1994},
note = {To appear.}
}
@inproceedings{APTE94b,
author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Toward Language Independent Automated Learning of Text Categorization Models},
booktitle = {sigir94},
year = {1994},
note = {To appear.}
}
@inproceedings{HAYES90,
author = {Philip J. Hayes and Peggy M. Anderson and Irene B. Nirenburg and
Linda M. Schmandt},
title = {{TCS}: A Shell for Content-Based Text Categorization},
booktitle = {IEEE Conference on Artificial Intelligence Applications},
year = {1990}
}
@inproceedings{HAYES90b,
author = {Philip J. Hayes and Steven P. Weinstein},
title = {{CONSTRUE/TIS:} A System for Content-Based Indexing of a
Database of News Stories},
booktitle = {Second Annual Conference on Innovative Applications of
Artificial Intelligence},
year = {1990}
}
@incollection{HAYES92,
author = {Philip J. Hayes},
title = {Intelligent High-Volume Text Processing using Shallow,
Domain-Specific Techniques},
booktitle = {Text-Based Intelligent Systems},
publisher = {Lawrence Erlbaum},
address = {Hillsdale, NJ},
year = {1992},
editor = {Paul S. Jacobs}
}
@inproceedings{LEWIS91c,
author = {David D. Lewis},
title = {Evaluating Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1991},
month = {feb},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {312--318}
}
@phdthesis{LEWIS91d,
author = {David Dolan Lewis},
title = {Representation and Learning in Information Retrieval},
school = {Computer Science Dept.; Univ. of Massachusetts; Amherst, MA 01003},
  year = {1992},
note = {Technical Report 91--93.}
}
@inproceedings{LEWIS91e,
author = {David D. Lewis},
title = {Data Extraction as Text Categorization: An Experiment with
the {MUC-3} Corpus},
booktitle = {Proceedings of the Third Message Understanding Evaluation
and Conference},
year = {1991},
month = {may},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92b,
author = {David D. Lewis},
title = {An Evaluation of Phrasal and Clustered Representations on a Text
Categorization Task},
booktitle = {Fifteenth Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval},
year = {1992},
pages = {37--50}
}
@inproceedings{LEWIS92d,
author = {David D. Lewis and Richard M. Tong},
title = {Text Filtering in {MUC-3} and {MUC-4}},
booktitle = {Proceedings of the Fourth Message Understanding Conference ({MUC-4})},
year = {1992},
month = {jun},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92e,
author = {David D. Lewis},
title = {Feature Selection and Feature Extraction for Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1992},
  month = {feb},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {212--217}
}
@inproceedings{LEWIS94b,
author = {David D. Lewis and Marc Ringuette},
title = {A Comparison of Two Learning Algorithms for Text Categorization},
booktitle = {Symposium on Document Analysis and Information Retrieval},
year = {1994},
organization = {ISRI; Univ. of Nevada, Las Vegas},
address = {Las Vegas, NV},
month = {apr},
pages = {81--93}
}
@article{LEWIS94d,
author = {David D. Lewis and Philip J. Hayes},
title = {Guest Editorial},
journal = {ACM Transactions on Information Systems},
year = {1994},
volume = {12},
number = {3},
pages = {231},
month = {jul}
}
@article{SPARCKJONES76,
author = {K. {Sparck Jones} and C. J. {van Rijsbergen}},
title = {Information Retrieval Test Collections},
journal = {Journal of Documentation},
year = {1976},
volume = {32},
number = {1},
pages = {59--75}
}
@book{WEISS91,
author = {Sholom M. Weiss and Casimir A. Kulikowski},
title = {Computer Systems That Learn},
publisher = {Morgan Kaufmann},
year = {1991},
address = {San Mateo, CA}
} | null | 6 | 3,146 | ---
language:
- en
license: other
paperswithcode_id: reuters-21578
pretty_name: Reuters-21578 Text Categorization Collection
dataset_info:
- config_name: ModApte
features:
- name: text
dtype: string
- name: text_type
dtype: string
- name: topics
sequence: string
- name: lewis_split
dtype: string
- name: cgis_split
dtype: string
- name: old_id
dtype: string
- name: new_id
dtype: string
- name: places
sequence: string
- name: people
sequence: string
- name: orgs
sequence: string
- name: exchanges
sequence: string
- name: date
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 2971653
num_examples: 3299
- name: train
num_bytes: 9161179
num_examples: 9603
- name: unused
num_bytes: 948244
num_examples: 722
download_size: 8150596
dataset_size: 13081076
- config_name: ModHayes
features:
- name: text
dtype: string
- name: text_type
dtype: string
- name: topics
sequence: string
- name: lewis_split
dtype: string
- name: cgis_split
dtype: string
- name: old_id
dtype: string
- name: new_id
dtype: string
- name: places
sequence: string
- name: people
sequence: string
- name: orgs
sequence: string
- name: exchanges
sequence: string
- name: date
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 948244
num_examples: 722
- name: train
num_bytes: 19071106
num_examples: 20856
download_size: 8150596
dataset_size: 20019350
- config_name: ModLewis
features:
- name: text
dtype: string
- name: text_type
dtype: string
- name: topics
sequence: string
- name: lewis_split
dtype: string
- name: cgis_split
dtype: string
- name: old_id
dtype: string
- name: new_id
dtype: string
- name: places
sequence: string
- name: people
sequence: string
- name: orgs
sequence: string
- name: exchanges
sequence: string
- name: date
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5400506
num_examples: 6188
- name: train
num_bytes: 12994591
num_examples: 13625
- name: unused
num_bytes: 948244
num_examples: 722
download_size: 8150596
dataset_size: 19343341
---
# Dataset Card for "reuters21578"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://archive.ics.uci.edu/dataset/137/reuters+21578+text+categorization+collection
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 24.45 MB
- **Size of the generated dataset:** 52.22 MB
- **Total amount of disk used:** 76.67 MB
### Dataset Summary
The Reuters-21578 dataset is one of the most widely used data collections for text
categorization research. It was collected from the Reuters financial newswire service in 1987.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### ModApte
- **Size of downloaded dataset files:** 8.15 MB
- **Size of the generated dataset:** 13.05 MB
- **Total amount of disk used:** 21.21 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"cgis_split": "\"TRAINING-SET\"",
"date": "19-MAR-1987 06:17:22.36",
"exchanges": [],
"lewis_split": "\"TRAIN\"",
"new_id": "\"7001\"",
"old_id": "\"11914\"",
"orgs": [],
"people": [],
"places": ["australia"],
"text": "\"Media group John Fairfax Ltd <FFXA.S>\\nsaid that its flat first half net profit partly reflected the\\nimpact of changes in t...",
"title": "FAIRFAX SAYS HIGHER TAX HITS FIRST HALF EARNINGS",
"topics": ["earn"]
}
```
#### ModHayes
- **Size of downloaded dataset files:** 8.15 MB
- **Size of the generated dataset:** 19.79 MB
- **Total amount of disk used:** 27.93 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"cgis_split": "\"TRAINING-SET\"",
"date": "19-OCT-1987 23:49:31.45",
"exchanges": [],
"lewis_split": "\"TEST\"",
"new_id": "\"20001\"",
"old_id": "\"20596\"",
"orgs": [],
"people": [],
"places": ["japan", "usa"],
"text": "\"If the dollar goes the way of Wall Street,\\nJapanese will finally move out of dollar investments in a\\nserious way, Japan inves...",
"title": "IF DOLLAR FOLLOWS WALL STREET JAPANESE WILL DIVEST",
"topics": ["money-fx"]
}
```
#### ModLewis
- **Size of downloaded dataset files:** 8.15 MB
- **Size of the generated dataset:** 19.38 MB
- **Total amount of disk used:** 27.54 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"cgis_split": "\"TRAINING-SET\"",
"date": "19-MAR-1987 06:17:22.36",
"exchanges": [],
"lewis_split": "\"TRAIN\"",
"new_id": "\"7001\"",
"old_id": "\"11914\"",
"orgs": [],
"people": [],
"places": ["australia"],
"text": "\"Media group John Fairfax Ltd <FFXA.S>\\nsaid that its flat first half net profit partly reflected the\\nimpact of changes in t...",
"title": "FAIRFAX SAYS HIGHER TAX HITS FIRST HALF EARNINGS",
"topics": ["earn"]
}
```
### Data Fields
The data fields are the same among all splits.
#### ModApte
- `text`: a `string` feature.
- `topics`: a `list` of `string` features.
- `lewis_split`: a `string` feature.
- `cgis_split`: a `string` feature.
- `old_id`: a `string` feature.
- `new_id`: a `string` feature.
- `places`: a `list` of `string` features.
- `people`: a `list` of `string` features.
- `orgs`: a `list` of `string` features.
- `exchanges`: a `list` of `string` features.
- `date`: a `string` feature.
- `title`: a `string` feature.
#### ModHayes
- `text`: a `string` feature.
- `topics`: a `list` of `string` features.
- `lewis_split`: a `string` feature.
- `cgis_split`: a `string` feature.
- `old_id`: a `string` feature.
- `new_id`: a `string` feature.
- `places`: a `list` of `string` features.
- `people`: a `list` of `string` features.
- `orgs`: a `list` of `string` features.
- `exchanges`: a `list` of `string` features.
- `date`: a `string` feature.
- `title`: a `string` feature.
#### ModLewis
- `text`: a `string` feature.
- `topics`: a `list` of `string` features.
- `lewis_split`: a `string` feature.
- `cgis_split`: a `string` feature.
- `old_id`: a `string` feature.
- `new_id`: a `string` feature.
- `places`: a `list` of `string` features.
- `people`: a `list` of `string` features.
- `orgs`: a `list` of `string` features.
- `exchanges`: a `list` of `string` features.
- `date`: a `string` feature.
- `title`: a `string` feature.
### Data Splits
#### ModApte
| |train|unused|test|
|-------|----:|-----:|---:|
|ModApte| 8762| 720|3009|
#### ModHayes
| |train|test|
|--------|----:|---:|
|ModHayes|18323| 720|
#### ModLewis
| |train|unused|test|
|--------|----:|-----:|---:|
|ModLewis|12449| 720|5458|
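The three configurations follow the split conventions of the collection's README. As an illustrative sketch of the convention (not the loader's actual code, and simplifying the topics attribute to a boolean), the ModApte split can be derived from `lewis_split` and whether a document carries topics:

```python
def modapte_split(lewis_split: str, has_topics: bool) -> str:
    """Assign a Reuters-21578 document to a ModApte split (sketch of the convention)."""
    if lewis_split == "TRAIN" and has_topics:
        return "train"
    if lewis_split == "TEST" and has_topics:
        return "test"
    # Documents without topics, or marked NOT-USED, fall outside ModApte
    return "unused"
```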
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
According to the dataset website (https://archive.ics.uci.edu/dataset/137/reuters+21578+text+categorization+collection),
this dataset is licensed under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
(CC BY 4.0) license.
However, the source data file contains a `README.txt` file with the following information under the
**Copyright & Notification** section:
> The copyright for the text of newswire articles and Reuters
annotations in the Reuters-21578 collection resides with Reuters Ltd.
Reuters Ltd. and Carnegie Group, Inc. have agreed to allow the free
distribution of this data *for research purposes only*.
> If you publish results based on this data set, please acknowledge
its use, refer to the data set by the name "Reuters-21578,
Distribution 1.0", and inform your readers of the current location of
the data set (see "Availability & Questions").
### Citation Information
```
@article{APTE94,
  author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Automated Learning of Decision Rules for Text Categorization},
journal = {ACM Transactions on Information Systems},
year = {1994},
note = {To appear.}
}
@inproceedings{APTE94b,
  author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Toward Language Independent Automated Learning of Text Categorization Models},
booktitle = {sigir94},
year = {1994},
note = {To appear.}
}
@inproceedings{HAYES90,
author = {Philip J. Hayes and Peggy M. Anderson and Irene B. Nirenburg and
Linda M. Schmandt},
title = {{TCS}: A Shell for Content-Based Text Categorization},
booktitle = {IEEE Conference on Artificial Intelligence Applications},
year = {1990}
}
@inproceedings{HAYES90b,
author = {Philip J. Hayes and Steven P. Weinstein},
title = {{CONSTRUE/TIS:} A System for Content-Based Indexing of a
Database of News Stories},
booktitle = {Second Annual Conference on Innovative Applications of
Artificial Intelligence},
year = {1990}
}
@incollection{HAYES92,
author = {Philip J. Hayes},
title = {Intelligent High-Volume Text Processing using Shallow,
Domain-Specific Techniques},
booktitle = {Text-Based Intelligent Systems},
publisher = {Lawrence Erlbaum},
address = {Hillsdale, NJ},
year = {1992},
editor = {Paul S. Jacobs}
}
@inproceedings{LEWIS91c,
author = {David D. Lewis},
title = {Evaluating Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1991},
month = {feb},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {312--318}
}
@phdthesis{LEWIS91d,
author = {David Dolan Lewis},
title = {Representation and Learning in Information Retrieval},
school = {Computer Science Dept.; Univ. of Massachusetts; Amherst, MA 01003},
  year = {1992},
note = {Technical Report 91--93.}
}
@inproceedings{LEWIS91e,
author = {David D. Lewis},
title = {Data Extraction as Text Categorization: An Experiment with
the {MUC-3} Corpus},
booktitle = {Proceedings of the Third Message Understanding Evaluation
and Conference},
year = {1991},
month = {may},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92b,
author = {David D. Lewis},
title = {An Evaluation of Phrasal and Clustered Representations on a Text
Categorization Task},
booktitle = {Fifteenth Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval},
year = {1992},
pages = {37--50}
}
@inproceedings{LEWIS92d,
author = {David D. Lewis and Richard M. Tong},
title = {Text Filtering in {MUC-3} and {MUC-4}},
booktitle = {Proceedings of the Fourth Message Understanding Conference ({MUC-4})},
year = {1992},
month = {jun},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92e,
author = {David D. Lewis},
title = {Feature Selection and Feature Extraction for Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1992},
  month = {feb},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {212--217}
}
@inproceedings{LEWIS94b,
author = {David D. Lewis and Marc Ringuette},
title = {A Comparison of Two Learning Algorithms for Text Categorization},
booktitle = {Symposium on Document Analysis and Information Retrieval},
year = {1994},
organization = {ISRI; Univ. of Nevada, Las Vegas},
address = {Las Vegas, NV},
month = {apr},
pages = {81--93}
}
@article{LEWIS94d,
author = {David D. Lewis and Philip J. Hayes},
title = {Guest Editorial},
journal = {ACM Transactions on Information Systems},
year = {1994},
volume = {12},
number = {3},
pages = {231},
month = {jul}
}
@article{SPARCKJONES76,
author = {K. {Sparck Jones} and C. J. {van Rijsbergen}},
title = {Information Retrieval Test Collections},
journal = {Journal of Documentation},
year = {1976},
volume = {32},
number = {1},
pages = {59--75}
}
@book{WEISS91,
author = {Sholom M. Weiss and Casimir A. Kulikowski},
title = {Computer Systems That Learn},
publisher = {Morgan Kaufmann},
year = {1991},
address = {San Mateo, CA}
}
```
### Contributions
Thanks to [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
NeelNanda/pile-10k | 2022-10-14T21:27:22.000Z | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | NeelNanda | null | null | null | 2 | 3,131 | ---
license: bigscience-bloom-rail-1.0
---
The first 10K elements of [The Pile](https://pile.eleuther.ai/), useful for debugging models trained on it. See the [HuggingFace page for the full Pile](https://huggingface.co/datasets/the_pile) for more info. Inspired by [stas' great resource](https://huggingface.co/datasets/stas/openwebtext-10k) doing the same for OpenWebText. |
acronym_identification | 2023-01-25T14:18:28.000Z | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"acronym-identification",
"arxiv:2010.14678",
"region:us"
] | null | Acronym identification training and development sets for the acronym identification task at SDU@AAAI-21. | @inproceedings{veyseh-et-al-2020-what,
title={{What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation}},
author={Amir Pouran Ben Veyseh and Franck Dernoncourt and Quan Hung Tran and Thien Huu Nguyen},
year={2020},
booktitle={Proceedings of COLING},
link={https://arxiv.org/pdf/2010.14678v1.pdf}
} | null | 17 | 3,115 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
tags:
- acronym-identification
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': B-long
'1': B-short
'2': I-long
'3': I-short
'4': O
splits:
- name: train
num_bytes: 7792803
num_examples: 14006
- name: validation
num_bytes: 952705
num_examples: 1717
- name: test
num_bytes: 987728
num_examples: 1750
download_size: 8556464
dataset_size: 9733236
train-eval-index:
- config: default
task: token-classification
task_id: entity_extraction
splits:
eval_split: test
col_mapping:
tokens: tokens
labels: tags
---
# Dataset Card for Acronym Identification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/sdu-aaai21/shared-task
- **Repository:** https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI
- **Paper:** [What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation](https://arxiv.org/pdf/2010.14678v1.pdf)
- **Leaderboard:** https://competitions.codalab.org/competitions/26609
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset contains the training, validation, and test data for the **Shared Task 1: Acronym Identification** of the AAAI-21 Workshop on Scientific Document Understanding.
### Supported Tasks and Leaderboards
The dataset supports an `acronym-identification` task, where the aim is to predict which tokens in a pre-tokenized sentence correspond to acronyms. The dataset was released for a shared task which supported a [leaderboard](https://competitions.codalab.org/competitions/26609).
### Languages
The sentences in the dataset are in English (`en`).
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{'id': 'TR-0',
'labels': [4, 4, 4, 4, 0, 2, 2, 4, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['What',
'is',
'here',
'called',
'controlled',
'natural',
'language',
'(',
'CNL',
')',
'has',
'traditionally',
'been',
'given',
'many',
'different',
'names',
'.']}
```
Please note that for test-set sentences only the `id` and `tokens` fields are available; `labels` can be ignored for the test set, since labels there are all `O`.
### Data Fields
The data instances have the following fields:
- `id`: a `string` variable representing the example id, unique across the full dataset
- `tokens`: a list of `string` variables representing the word-tokenized sentence
- `labels`: a list of `categorical` variables with possible values `["B-long", "B-short", "I-long", "I-short", "O"]` corresponding to a BIO scheme. `-long` corresponds to the expanded acronym, such as *controlled natural language* here, and `-short` to the abbreviation, `CNL` here.
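For example, the integer labels can be mapped back to their BIO tag strings with a small helper (an illustrative sketch; the helper names are our own):

```python
# Class names in id order, as declared in this dataset's features.
LABEL_NAMES = ["B-long", "B-short", "I-long", "I-short", "O"]

def decode_labels(label_ids):
    """Map integer label ids back to their BIO tag strings."""
    return [LABEL_NAMES[i] for i in label_ids]

# The training example shown above: "CNL" is the short form and
# "controlled natural language" the long form.
tokens = ["What", "is", "here", "called", "controlled", "natural",
          "language", "(", "CNL", ")", "has", "traditionally", "been",
          "given", "many", "different", "names", "."]
labels = [4, 4, 4, 4, 0, 2, 2, 4, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4]

tags = decode_labels(labels)
short_form = [t for t, g in zip(tokens, tags) if g.endswith("-short")]
long_form = [t for t, g in zip(tokens, tags) if g.endswith("-long")]
```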
### Data Splits
The training, validation, and test sets contain `14,006`, `1,717`, and `1,750` sentences respectively.
## Dataset Creation
### Curation Rationale
> First, most of the existing datasets for acronym identification (AI) are either limited in their sizes or created using simple rule-based methods.
> This is unfortunate as rules are in general not able to capture all the diverse forms to express acronyms and their long forms in text.
> Second, most of the existing datasets are in the medical domain, ignoring the challenges in other scientific domains.
> In order to address these limitations this paper introduces two new datasets for Acronym Identification.
> Notably, our datasets are annotated by human to achieve high quality and have substantially larger numbers of examples than the existing AI datasets in the non-medical domain.
### Source Data
#### Initial Data Collection and Normalization
> In order to prepare a corpus for acronym annotation, we collect a corpus of 6,786 English papers from arXiv.
> These papers consist of 2,031,592 sentences that would be used for data annotation for AI in this work.
The dataset paper does not report the exact tokenization method.
#### Who are the source language producers?
The language comes from papers hosted on the online digital archive [arXiv](https://arxiv.org/). No more information is available on the selection process or identity of the writers.
### Annotations
#### Annotation process
> Each sentence for annotation needs to contain at least one word in which more than half of the characters are capital letters (i.e., acronym candidates).
> Afterward, we search for a sub-sequence of words in which the concatenation of the first one, two or three characters of the words (in the order of the words in the sub-sequence) could form an acronym candidate.
> We call the sub-sequence a long form candidate. If we cannot find any long form candidate, we remove the sentence.
> Using this process, we end up with 17,506 sentences to be annotated manually by the annotators from Amazon Mechanical Turk (MTurk).
> In particular, we create a HIT for each sentence and ask the workers to annotate the short forms and the long forms in the sentence.
> In case of disagreements, if two out of three workers agree on an annotation, we use majority voting to decide the correct annotation.
> Otherwise, a fourth annotator is hired to resolve the conflict.
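The candidate-detection heuristic described above can be sketched as follows (our own illustrative reimplementation, not the authors' code):

```python
def is_acronym_candidate(word):
    """A word is an acronym candidate if more than half of its
    characters are capital letters."""
    caps = sum(1 for c in word if c.isupper())
    return len(word) > 0 and caps > len(word) / 2

def matches_long_form(acronym, words):
    """Check whether concatenating the first one, two, or three characters
    of each word, in order, can spell out the acronym candidate."""
    target = acronym.lower()

    def match(pos, widx):
        if pos == len(target):
            return widx == len(words)  # all words consumed exactly
        if widx == len(words):
            return False
        word = words[widx].lower()
        for take in (1, 2, 3):
            prefix = word[:take]
            if prefix and target.startswith(prefix, pos):
                if match(pos + len(prefix), widx + 1):
                    return True
        return False

    return match(0, 0)
```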
#### Who are the annotators?
Workers were recruited through Amazon Mechanical Turk and paid $0.05 per annotation. No further demographic information is provided.
### Personal and Sensitive Information
Papers published on arXiv are unlikely to contain much personal information, although some do include some poorly chosen examples revealing personal details, so the data should be used with care.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset provided for this shared task is licensed under CC BY-NC-SA 4.0 international license.
### Citation Information
```
@inproceedings{Veyseh2020,
author = {Amir Pouran Ben Veyseh and
Franck Dernoncourt and
Quan Hung Tran and
Thien Huu Nguyen},
editor = {Donia Scott and
N{\'{u}}ria Bel and
Chengqing Zong},
title = {What Does This Acronym Mean? Introducing a New Dataset for Acronym
Identification and Disambiguation},
booktitle = {Proceedings of the 28th International Conference on Computational
Linguistics, {COLING} 2020, Barcelona, Spain (Online), December 8-13,
2020},
pages = {3285--3301},
publisher = {International Committee on Computational Linguistics},
year = {2020},
url = {https://doi.org/10.18653/v1/2020.coling-main.292},
doi = {10.18653/v1/2020.coling-main.292}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
gigaword | 2023-04-05T10:06:42.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|gigaword_2003",
"language:en",
"license:mit",
"headline-generation",
"arxiv:1509.00685",
"region:us"
] | null | Headline-generation on a corpus of article pairs from Gigaword consisting of
around 4 million articles. Use the 'org_data' provided by
https://github.com/microsoft/unilm/ which is identical to
https://github.com/harvardnlp/sent-summary but with better format.
There are two features:
- document: article.
- summary: headline. | @article{graff2003english,
title={English gigaword},
author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
journal={Linguistic Data Consortium, Philadelphia},
volume={4},
number={1},
pages={34},
year={2003}
}
@article{Rush_2015,
title={A Neural Attention Model for Abstractive Sentence Summarization},
url={http://dx.doi.org/10.18653/v1/D15-1044},
DOI={10.18653/v1/d15-1044},
journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
publisher={Association for Computational Linguistics},
author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
year={2015}
} | null | 18 | 3,102 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|gigaword_2003
task_categories:
- summarization
task_ids: []
paperswithcode_id: null
pretty_name: Gigaword
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
tags:
- headline-generation
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 915249388
num_examples: 3803957
- name: validation
num_bytes: 45767096
num_examples: 189651
- name: test
num_bytes: 450782
num_examples: 1951
download_size: 578402958
dataset_size: 961467266
---
# Dataset Card for Gigaword
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Gigaword repository](https://github.com/harvardnlp/sent-summary)
- **Leaderboard:** [Gigaword leaderboard](https://paperswithcode.com/sota/text-summarization-on-gigaword)
- **Paper:** [A Neural Attention Model for Abstractive Sentence Summarization](https://arxiv.org/abs/1509.00685)
- **Point of Contact:** [Alexander Rush](mailto:arush@cornell.edu)
- **Size of downloaded dataset files:** 578.41 MB
- **Size of the generated dataset:** 962.96 MB
- **Total amount of disk used:** 1.54 GB
### Dataset Summary
Headline-generation on a corpus of article pairs from Gigaword consisting of
around 4 million articles. Use the 'org_data' provided by
https://github.com/microsoft/unilm/ which is identical to
https://github.com/harvardnlp/sent-summary but with better format.
### Supported Tasks and Leaderboards
- `summarization`: This dataset can be used for summarization, where, given a document, the goal is to predict its summary. The model performance is evaluated using the [ROUGE](https://huggingface.co/metrics/rouge) metric. The leaderboard for this task is available [here](https://paperswithcode.com/sota/text-summarization-on-gigaword).
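ROUGE is an n-gram overlap metric between a generated and a reference summary. As a rough illustration of what it measures, here is a toy ROUGE-1 F1 (in practice, use the ROUGE implementation from an evaluation library rather than this sketch):

```python
from collections import Counter

def rouge1_f1(prediction, reference):
    """Unigram-overlap F1 between a predicted and a reference summary."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    if not pred or not ref:
        return 0.0
    # Clipped unigram matches: each reference token counts at most once.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```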
### Languages
English.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'document': "australia 's current account deficit shrunk by a record #.## billion dollars -lrb- #.## billion us -rrb- in the june quarter due to soaring commodity prices , figures released monday showed .",
'summary': 'australian current account deficit narrows sharply'
}
```
### Data Fields
The data fields are the same among all splits.
- `document`: a `string` feature.
- `summary`: a `string` feature.
### Data Splits
| name | train |validation|test|
|-------|------:|---------:|---:|
|default|3803957| 189651|1951|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> For our training set, we pair the headline of each article with its first sentence to create an input-summary pair. While the model could in theory be trained on any pair, Gigaword contains many spurious headline-article pairs. We therefore prune training based on the following heuristic filters: (1) Are there no non-stop-words in common? (2) Does the title contain a byline or other extraneous editing marks? (3) Does the title have a question mark or colon? After applying these filters, the training set consists of roughly J = 4 million title-article pairs. We apply a minimal preprocessing step using PTB tokenization, lower-casing, replacing all digit characters with #, and replacing word types seen less than 5 times with UNK. We also remove all articles from the time-period of the DUC evaluation.
> The complete input training vocabulary consists of 119 million word tokens and 110K unique word types with an average sentence size of 31.3 words. The headline vocabulary consists of 31 million tokens and 69K word types with the average title of length 8.3 words (note that this is significantly shorter than the DUC summaries). On average there are 4.6 overlapping word types between the headline and the input; although only 2.6 in the first 75-characters of the input.
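The normalization steps quoted above (lower-casing, masking digits with `#`, mapping rare word types to `UNK`) can be sketched as follows; this is an illustrative reimplementation, not the original pipeline, and it assumes tokenization has already been done:

```python
import re

def normalize(tokens, vocab):
    """Gigaword-style normalization: lower-case each token, replace every
    digit character with '#', and map out-of-vocabulary types to 'UNK'."""
    out = []
    for tok in tokens:
        tok = re.sub(r"\d", "#", tok.lower())
        out.append(tok if tok in vocab else "UNK")
    return out
```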
#### Who are the source language producers?
From the paper:
> For training data for both tasks, we utilize the annotated Gigaword data set (Graff et al., 2003; Napoles et al., 2012), which consists of standard Gigaword, preprocessed with Stanford CoreNLP tools (Manning et al., 2014).
### Annotations
#### Annotation process
Annotations are inherited from the annotated Gigaword data set.
Additional information from the paper:
> Our model only uses annotations for tokenization and sentence separation, although several of the baselines use parsing and tagging as well.
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```bibtex
@article{graff2003english,
title={English gigaword},
author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
journal={Linguistic Data Consortium, Philadelphia},
volume={4},
number={1},
pages={34},
year={2003}
}
@article{Rush_2015,
title={A Neural Attention Model for Abstractive Sentence Summarization},
url={http://dx.doi.org/10.18653/v1/D15-1044},
DOI={10.18653/v1/d15-1044},
journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
publisher={Association for Computational Linguistics},
author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
year={2015}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
mhenrichsen/alpaca_2k_test | 2023-07-22T19:48:57.000Z | [
"license:apache-2.0",
"region:us"
] | mhenrichsen | null | null | null | 3 | 3,088 | ---
license: apache-2.0
---
|
elyza/ELYZA-tasks-100 | 2023-09-26T01:38:42.000Z | [
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:ja",
"license:cc-by-sa-4.0",
"arxiv:2307.09288",
"region:us"
] | elyza | null | null | null | 21 | 3,086 | ---
task_categories:
- text2text-generation
language:
- ja
size_categories:
- n<1K
license: cc-by-sa-4.0
---
# ELYZA-tasks-100: a Japanese instruction-model evaluation dataset

## Data Description
This is an evaluation dataset for instruction-tuned models. See the [release article on note](https://note.com/elyza/n/na405acaca130) for details.
Features:
- 100 Japanese examples covering complex instructions and tasks.
- Models are expected to produce careful, polite output, as a helpful AI assistant.
- Every example is annotated with evaluation criteria, which is expected to reduce variance in scoring.
Concretely, the dataset includes tasks such as:
- Correcting a summary and explaining the corrections
- Stating an abstract lesson drawn from a concrete episode
- Inferring the user's intent and behaving as a helpful AI assistant
- Complex arithmetic requiring case analysis
- Extracting patterns from an unknown language and translating it into Japanese, which requires advanced reasoning
- Generating a YouTube dialogue while following multiple instructions
- Tasks demanding imagination, such as generation and ogiri-style humor about fictional creatures or idioms
## Usage
The dataset is available through the `datasets` library.
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("elyza/ELYZA-tasks-100")
>>> ds
DatasetDict({
test: Dataset({
features: ["input", "output", "eval_aspect"],
num_rows: 100
})
})
>>> ds["test"][0]
{
'input': '仕事の熱意を取り戻すためのアイデアを5つ挙げてください。',
'output': '1. 自分の仕事に対する興味を再発見するために、新しい技能や知識を学ぶこと。\n2. カレッジやセミナーなどで講演を聴くことで、仕事に対する新しいアイデアや視点を得ること。\n3. 仕事に対してストレスを感じている場合は、ストレスマネジメントのテクニックを学ぶこと。\n4. 仕事以外の楽しいことをすることで、ストレスを発散すること。\n5. 仕事に対して自己評価をすることで、自分がどのように進化しているのかを知ること。',
'eval_aspect': '- 熱意を取り戻すのではなく、仕事の効率化・スキルアップのような文脈になっていたら1点減点\n- 出したアイデアが5つより多い、少ない場合は1点減点\n- 5つのアイデアのうち、内容が重複しているものがあれば1点減点\n\n'
}
```
## Baseline Evaluation
This dataset can be used with any evaluation style (manual or automatic, absolute or relative scoring); as the baseline evaluation reported here, we performed manual absolute evaluation on a 5-point scale.
### Evaluation procedure
1. We ran inference with each baseline model, as in [these inference scripts](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/tree/main/baseline/scripts), and stored the predictions under [baseline/preds](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/tree/main/baseline/preds).
   - Generation parameters were, in principle, the default values documented in each model's README or similar.
2. Using [shuffle_for_humaneval.py](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/shuffle_for_humaneval.py), we created anonymized model predictions, [shuffled_preds.csv](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/shuffled_preds.csv), and a lookup table for undoing the anonymization, [uuids.csv](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/uuids.csv).
3. We uploaded [shuffled_preds.csv](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/shuffled_preds.csv) to a Google Spreadsheet, and three people rated each example by hand following the [evaluation guideline](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/guideline.md).
4. We downloaded the spreadsheet ratings as [annotated_shuffled_preds.xlsx](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/annotated_shuffled_preds.xlsx), used [deshuffle_annotations.py](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/deshuffle_annotations.py) to de-anonymize them, and saved the result as [annotated_deshuffled_preds.csv](https://huggingface.co/datasets/elyza/ELYZA-tasks-100/blob/main/baseline/humaneval/annotated_deshuffled_preds.csv).
5. Finally, we uploaded everything to the [evaluation results sheet](https://docs.google.com/spreadsheets/d/1mtoy4QAqDPk2f_B0vDogFoOrbA5G42DBEEHdqM4VmDI/edit#gid=1023787356), a Google Spreadsheet, for visualization.
### Evaluation results
- For the scores, see the [release article on note](https://note.com/elyza/n/na405acaca130).
- [Evaluation results sheet](https://docs.google.com/spreadsheets/d/1mtoy4QAqDPk2f_B0vDogFoOrbA5G42DBEEHdqM4VmDI/edit#gid=1023787356):
  - All inputs, outputs, and ratings are published there; it reveals model tendencies that scores alone cannot show.
### On the validity of the evaluation method
We wrote a detailed analysis of this baseline evaluation on our [Zenn technical blog](https://zenn.dev/elyza/articles/5e7d9373c32a98); please take a look as well.
## Automatic evaluation with GPT-4
The same [Zenn technical blog post](https://zenn.dev/elyza/articles/5e7d9373c32a98) also presents the code and results for running this evaluation with GPT-4.
## Developers
Listed in alphabetical order:
- [Akira Sasaki](https://huggingface.co/akirasasaki)
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)
## License

This dataset is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja).
## How to Cite
```tex
@misc{elyzatasks100,
title={ELYZA-tasks-100: 日本語instructionモデル評価データセット},
url={https://huggingface.co/elyza/ELYZA-tasks-100},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura},
year={2023},
}
```
## Citations
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Cubpaw/voxelgym_5c_42x42_25000 | 2023-05-31T21:28:34.000Z | [
"region:us"
] | Cubpaw | null | null | null | 0 | 3,070 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: rgb_label
dtype: image
- name: path_label
dtype: image
- name: path_rgb_label
dtype: image
splits:
- name: train
num_bytes: 18480640.0
num_examples: 20000
- name: validation
num_bytes: 4639555.0
num_examples: 5000
download_size: 17635174
dataset_size: 23120195.0
---
# Dataset Card for "voxelgym_5c_42x42_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quoref | 2023-04-05T13:37:27.000Z | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"coreference-resolution",
"region:us"
] | null | Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this
span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard
coreferences before selecting the appropriate span(s) in the paragraphs for answering questions. | @article{allenai:quoref,
author = {Pradeep Dasigi and Nelson F. Liu and Ana Marasovic and Noah A. Smith and Matt Gardner},
title = {Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},
journal = {arXiv:1908.05803v2 },
year = {2019},
} | null | 3 | 3,057 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Quoref
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: quoref
tags:
- coreference-resolution
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 44377729
num_examples: 19399
- name: validation
num_bytes: 5442031
num_examples: 2418
download_size: 5078438
dataset_size: 49819760
---
# Dataset Card for "quoref"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://allenai.org/data/quoref
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning](https://aclanthology.org/D19-1606/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.08 MB
- **Size of the generated dataset:** 49.82 MB
- **Total amount of disk used:** 54.90 MB
### Dataset Summary
Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this
span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard
coreferences before selecting the appropriate span(s) in the paragraphs for answering questions.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 5.08 MB
- **Size of the generated dataset:** 49.82 MB
- **Total amount of disk used:** 54.90 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [1633],
"text": ["Frankie"]
},
"context": "\"Frankie Bono, a mentally disturbed hitman from Cleveland, comes back to his hometown in New York City during Christmas week to ...",
"id": "bfc3b34d6b7e73c0bd82a009db12e9ce196b53e6",
"question": "What is the first name of the person who has until New Year's Eve to perform a hit?",
"title": "Blast of Silence",
"url": "https://en.wikipedia.org/wiki/Blast_of_Silence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `context`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
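Because answers are given as character offsets into `context`, a useful sanity check is that slicing the context at `answer_start` reproduces the answer text (a small helper of our own; the toy example below follows the card's schema):

```python
def answers_aligned(example):
    """Check that context[start:start+len(text)] equals text for every answer."""
    ctx = example["context"]
    starts = example["answers"]["answer_start"]
    texts = example["answers"]["text"]
    return all(ctx[s:s + len(t)] == t for s, t in zip(starts, texts))

# Toy instance in the same shape as the dataset's examples.
example = {
    "context": "Frankie Bono, a mentally disturbed hitman from Cleveland, ...",
    "answers": {"answer_start": [0], "text": ["Frankie"]},
}
ok = answers_aligned(example)
```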
### Data Splits
| name |train|validation|
|-------|----:|---------:|
|default|19399| 2418|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{allenai:quoref,
author = {Pradeep Dasigi and Nelson F. Liu and Ana Marasovic and Noah A. Smith and Matt Gardner},
title = {Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},
journal = {arXiv:1908.05803v2},
year = {2019},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
cos_e | 2023-04-05T10:02:39.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|commonsense_qa",
"language:en",
"license:unknown",
"arxiv:1906.02361",
"region:us"
] | null | Common Sense Explanations (CoS-E) allows for training language models to
automatically generate explanations that can be used during training and
inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. | @inproceedings{rajani2019explain,
title = {Explain Yourself! Leveraging Language models for Commonsense Reasoning},
author = {Rajani, Nazneen Fatema and
McCann, Bryan and
Xiong, Caiming and
Socher, Richard},
year = {2019},
booktitle = {Proceedings of the 2019 Conference of the Association for Computational Linguistics (ACL2019)},
url = {https://arxiv.org/abs/1906.02361}
} | null | 6 | 2,995 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- unknown
multilinguality:
- monolingual
pretty_name: Commonsense Explanations
size_categories:
- 10K<n<100K
source_datasets:
- extended|commonsense_qa
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: cos-e
dataset_info:
- config_name: v1.0
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: abstractive_explanation
dtype: string
- name: extractive_explanation
dtype: string
splits:
- name: train
num_bytes: 2077517
num_examples: 7610
- name: validation
num_bytes: 261887
num_examples: 950
download_size: 4295320
dataset_size: 2339404
- config_name: v1.11
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: abstractive_explanation
dtype: string
- name: extractive_explanation
dtype: string
splits:
- name: train
num_bytes: 2717420
num_examples: 9741
- name: validation
num_bytes: 331760
num_examples: 1221
download_size: 6535534
dataset_size: 3049180
---
# Dataset Card for "cos_e"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/salesforce/cos-e
- **Paper:** [Explain Yourself! Leveraging Language Models for Commonsense Reasoning](https://arxiv.org/abs/1906.02361)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 10.83 MB
- **Size of the generated dataset:** 5.39 MB
- **Total amount of disk used:** 16.22 MB
### Dataset Summary
Common Sense Explanations (CoS-E) allows for training language models to
automatically generate explanations that can be used during training and
inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### v1.0
- **Size of downloaded dataset files:** 4.30 MB
- **Size of the generated dataset:** 2.34 MB
- **Total amount of disk used:** 6.64 MB
An example of 'train' looks as follows.
```
{
"abstractive_explanation": "this is open-ended",
"answer": "b",
"choices": ["a", "b", "c"],
"extractive_explanation": "this is selected train",
"id": "42",
"question": "question goes here."
}
```
#### v1.11
- **Size of downloaded dataset files:** 6.53 MB
- **Size of the generated dataset:** 3.05 MB
- **Total amount of disk used:** 9.58 MB
An example of 'train' looks as follows.
```
{
"abstractive_explanation": "this is open-ended",
"answer": "b",
"choices": ["a", "b", "c"],
"extractive_explanation": "this is selected train",
"id": "42",
"question": "question goes here."
}
```
### Data Fields
The data fields are the same among all splits.
#### v1.0
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a `list` of `string` features.
- `answer`: a `string` feature.
- `abstractive_explanation`: a `string` feature.
- `extractive_explanation`: a `string` feature.
#### v1.11
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a `list` of `string` features.
- `answer`: a `string` feature.
- `abstractive_explanation`: a `string` feature.
- `extractive_explanation`: a `string` feature.
### Data Splits
|name |train|validation|
|-----|----:|---------:|
|v1.0 | 7610| 950|
|v1.11| 9741| 1221|
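The structure above can be checked programmatically. A minimal sketch using the card's own illustrative record from the "Data Instances" section:

```python
# The illustrative record from the "Data Instances" section above.
record = {
    "id": "42",
    "question": "question goes here.",
    "choices": ["a", "b", "c"],
    "answer": "b",
    "abstractive_explanation": "this is open-ended",
    "extractive_explanation": "this is selected train",
}

# The gold answer is always one of the listed choices; its position
# gives a zero-based index usable as a multiple-choice training target.
assert record["answer"] in record["choices"]
label_index = record["choices"].index(record["answer"])
assert label_index == 1
```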
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Unknown.
### Citation Information
```
@inproceedings{rajani2019explain,
title = "Explain Yourself! Leveraging Language models for Commonsense Reasoning",
author = "Rajani, Nazneen Fatema and
McCann, Bryan and
Xiong, Caiming and
Socher, Richard",
year="2019",
booktitle = "Proceedings of the 2019 Conference of the Association for Computational Linguistics (ACL2019)",
url ="https://arxiv.org/abs/1906.02361"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
banking77 | 2023-04-17T13:46:23.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2003.04807",
"region:us"
] | null | BANKING77 dataset provides a very fine-grained set of intents in a banking domain.
It comprises 13,083 customer service queries labeled with 77 intents.
It focuses on fine-grained single-domain intent detection. | null | null | 26 | 2,972 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
pretty_name: BANKING77
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': activate_my_card
'1': age_limit
'2': apple_pay_or_google_pay
'3': atm_support
'4': automatic_top_up
'5': balance_not_updated_after_bank_transfer
'6': balance_not_updated_after_cheque_or_cash_deposit
'7': beneficiary_not_allowed
'8': cancel_transfer
'9': card_about_to_expire
'10': card_acceptance
'11': card_arrival
'12': card_delivery_estimate
'13': card_linking
'14': card_not_working
'15': card_payment_fee_charged
'16': card_payment_not_recognised
'17': card_payment_wrong_exchange_rate
'18': card_swallowed
'19': cash_withdrawal_charge
'20': cash_withdrawal_not_recognised
'21': change_pin
'22': compromised_card
'23': contactless_not_working
'24': country_support
'25': declined_card_payment
'26': declined_cash_withdrawal
'27': declined_transfer
'28': direct_debit_payment_not_recognised
'29': disposable_card_limits
'30': edit_personal_details
'31': exchange_charge
'32': exchange_rate
'33': exchange_via_app
'34': extra_charge_on_statement
'35': failed_transfer
'36': fiat_currency_support
'37': get_disposable_virtual_card
'38': get_physical_card
'39': getting_spare_card
'40': getting_virtual_card
'41': lost_or_stolen_card
'42': lost_or_stolen_phone
'43': order_physical_card
'44': passcode_forgotten
'45': pending_card_payment
'46': pending_cash_withdrawal
'47': pending_top_up
'48': pending_transfer
'49': pin_blocked
'50': receiving_money
'51': Refund_not_showing_up
'52': request_refund
'53': reverted_card_payment?
'54': supported_cards_and_currencies
'55': terminate_account
'56': top_up_by_bank_transfer_charge
'57': top_up_by_card_charge
'58': top_up_by_cash_or_cheque
'59': top_up_failed
'60': top_up_limits
'61': top_up_reverted
'62': topping_up_by_card
'63': transaction_charged_twice
'64': transfer_fee_charged
'65': transfer_into_account
'66': transfer_not_received_by_recipient
'67': transfer_timing
'68': unable_to_verify_identity
'69': verify_my_identity
'70': verify_source_of_funds
'71': verify_top_up
'72': virtual_card_not_working
'73': visa_or_mastercard
'74': why_verify_identity
'75': wrong_amount_of_cash_received
'76': wrong_exchange_rate_for_cash_withdrawal
splits:
- name: train
num_bytes: 715036
num_examples: 10003
- name: test
num_bytes: 204014
num_examples: 3080
download_size: 1079034
dataset_size: 919050
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for BANKING77
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets)
- **Repository:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets)
- **Paper:** [ArXiv](https://arxiv.org/abs/2003.04807)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "banking77" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/PolyAI/banking77">PolyAI/banking77</a>" instead.</p>
</div>
Dataset composed of online banking queries annotated with their corresponding intents.
BANKING77 dataset provides a very fine-grained set of intents in a banking domain.
It comprises 13,083 customer service queries labeled with 77 intents.
It focuses on fine-grained single-domain intent detection.
### Supported Tasks and Leaderboards
Intent classification, intent detection
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'label': 11, # integer label corresponding to "card_arrival" intent
'text': 'I am still waiting on my card?'
}
```
### Data Fields
- `text`: a string feature.
- `label`: one of 77 classification labels (0-76), each corresponding to a unique intent.
Intent names are mapped to `label` in the following way:
| label | intent (category) |
|---:|:-------------------------------------------------|
| 0 | activate_my_card |
| 1 | age_limit |
| 2 | apple_pay_or_google_pay |
| 3 | atm_support |
| 4 | automatic_top_up |
| 5 | balance_not_updated_after_bank_transfer |
| 6 | balance_not_updated_after_cheque_or_cash_deposit |
| 7 | beneficiary_not_allowed |
| 8 | cancel_transfer |
| 9 | card_about_to_expire |
| 10 | card_acceptance |
| 11 | card_arrival |
| 12 | card_delivery_estimate |
| 13 | card_linking |
| 14 | card_not_working |
| 15 | card_payment_fee_charged |
| 16 | card_payment_not_recognised |
| 17 | card_payment_wrong_exchange_rate |
| 18 | card_swallowed |
| 19 | cash_withdrawal_charge |
| 20 | cash_withdrawal_not_recognised |
| 21 | change_pin |
| 22 | compromised_card |
| 23 | contactless_not_working |
| 24 | country_support |
| 25 | declined_card_payment |
| 26 | declined_cash_withdrawal |
| 27 | declined_transfer |
| 28 | direct_debit_payment_not_recognised |
| 29 | disposable_card_limits |
| 30 | edit_personal_details |
| 31 | exchange_charge |
| 32 | exchange_rate |
| 33 | exchange_via_app |
| 34 | extra_charge_on_statement |
| 35 | failed_transfer |
| 36 | fiat_currency_support |
| 37 | get_disposable_virtual_card |
| 38 | get_physical_card |
| 39 | getting_spare_card |
| 40 | getting_virtual_card |
| 41 | lost_or_stolen_card |
| 42 | lost_or_stolen_phone |
| 43 | order_physical_card |
| 44 | passcode_forgotten |
| 45 | pending_card_payment |
| 46 | pending_cash_withdrawal |
| 47 | pending_top_up |
| 48 | pending_transfer |
| 49 | pin_blocked |
| 50 | receiving_money |
| 51 | Refund_not_showing_up |
| 52 | request_refund |
| 53 | reverted_card_payment? |
| 54 | supported_cards_and_currencies |
| 55 | terminate_account |
| 56 | top_up_by_bank_transfer_charge |
| 57 | top_up_by_card_charge |
| 58 | top_up_by_cash_or_cheque |
| 59 | top_up_failed |
| 60 | top_up_limits |
| 61 | top_up_reverted |
| 62 | topping_up_by_card |
| 63 | transaction_charged_twice |
| 64 | transfer_fee_charged |
| 65 | transfer_into_account |
| 66 | transfer_not_received_by_recipient |
| 67 | transfer_timing |
| 68 | unable_to_verify_identity |
| 69 | verify_my_identity |
| 70 | verify_source_of_funds |
| 71 | verify_top_up |
| 72 | virtual_card_not_working |
| 73 | visa_or_mastercard |
| 74 | why_verify_identity |
| 75 | wrong_amount_of_cash_received |
| 76 | wrong_exchange_rate_for_cash_withdrawal |
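The integer-to-intent mapping above can be reproduced as a plain lookup table. A minimal sketch covering only the first twelve labels (truncated for brevity; the full mapping has 77 entries):

```python
# First twelve entries of the label-to-intent table above
# (truncated for brevity; the full mapping has 77 entries).
intent_names = [
    "activate_my_card",
    "age_limit",
    "apple_pay_or_google_pay",
    "atm_support",
    "automatic_top_up",
    "balance_not_updated_after_bank_transfer",
    "balance_not_updated_after_cheque_or_cash_deposit",
    "beneficiary_not_allowed",
    "cancel_transfer",
    "card_about_to_expire",
    "card_acceptance",
    "card_arrival",
]

# The example instance from the "Data Instances" section above:
example = {"label": 11, "text": "I am still waiting on my card?"}
assert intent_names[example["label"]] == "card_arrival"
```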
### Data Splits
| Dataset statistics | Train | Test |
| --- | --- | --- |
| Number of examples | 10,003 | 3,080 |
| Average character length | 59.5 | 54.2 |
| Number of intents | 77 | 77 |
| Number of domains | 1 | 1 |
## Dataset Creation
### Curation Rationale
Previous intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to a small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large-scale *multi-domain* datasets ([HWU64](https://github.com/xliuhw/NLU-Evaluation-Data) and [CLINC150](https://github.com/clinc/oos-eval)), the examples per domain may not sufficiently capture the full complexity of each domain as encountered "in the wild". This dataset tries to fill that gap and provides a very fine-grained set of intents in a *single-domain* setting, i.e. **banking**. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better intent detection systems.
Any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[PolyAI](https://github.com/PolyAI-LDN)
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
```
@inproceedings{Casanueva2020,
author = {I{\~{n}}igo Casanueva and Tadas Temcinas and Daniela Gerz and Matthew Henderson and Ivan Vulic},
title = {Efficient Intent Detection with Dual Sentence Encoders},
year = {2020},
month = {mar},
note = {Data available at https://github.com/PolyAI-LDN/task-specific-datasets},
url = {https://arxiv.org/abs/2003.04807},
booktitle = {Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020}
}
```
### Contributions
Thanks to [@dkajtoch](https://github.com/dkajtoch) for adding this dataset. |
hf-internal-testing/fixtures_ade20k | 2021-11-09T10:26:23.000Z | [
"region:us"
] | hf-internal-testing | \\n | \\n | null | 0 | 2,956 | Entry not found |
fujiki/databricks-dolly-15k-ja-reformat-v1 | 2023-10-06T13:37:15.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | fujiki | null | null | null | 0 | 2,919 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: index
dtype: string
- name: category
dtype: string
- name: instructions
sequence: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 15973503
num_examples: 15015
download_size: 9056298
dataset_size: 15973503
---
This is a reformatted version of [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja).
If you use this dataset, please cite the original dataset as well. |
kilt_tasks | 2023-06-01T14:59:56.000Z | [
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:dialogue-modeling",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:extractive-qa",
"task_ids:fact-checking",
"task_ids:fact-checking-retrieval",
"task_ids:open-domain-abstractive-qa",
"task_ids:open-domain-qa",
"task_ids:slot-filling",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"source_datasets:extended|natural_questions",
"source_datasets:extended|other-aidayago",
"source_datasets:extended|other-fever",
"source_datasets:extended|other-hotpotqa",
"source_datasets:extended|other-trex",
"source_datasets:extended|other-triviaqa",
"source_datasets:extended|other-wizardsofwikipedia",
"source_datasets:extended|other-wned-cweb",
"source_datasets:extended|other-wned-wiki",
"source_datasets:extended|other-zero-shot-re",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2009.02252",
"region:us"
] | null | KILT tasks training and evaluation data.
- [FEVER](https://fever.ai) | Fact Checking | fever
- [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) | Entity Linking | aidayago2
- [WNED-WIKI](https://github.com/U-Alberta/wned) | Entity Linking | wned
- [WNED-CWEB](https://github.com/U-Alberta/wned) | Entity Linking | cweb
- [T-REx](https://hadyelsahar.github.io/t-rex) | Slot Filling | trex
- [Zero-Shot RE](http://nlp.cs.washington.edu/zeroshot) | Slot Filling | structured_zeroshot
- [Natural Questions](https://ai.google.com/research/NaturalQuestions) | Open Domain QA | nq
- [HotpotQA](https://hotpotqa.github.io) | Open Domain QA | hotpotqa
- [TriviaQA](http://nlp.cs.washington.edu/triviaqa) | Open Domain QA | triviaqa
- [ELI5](https://facebookresearch.github.io/ELI5/explore.html) | Open Domain QA | eli5
- [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia) | Dialogue | wow
To finish linking TriviaQA questions to the IDs provided, follow the instructions [here](http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md). | @inproceedings{fb_kilt,
author = {Fabio Petroni and
Aleksandra Piktus and
Angela Fan and
Patrick Lewis and
Majid Yazdani and
Nicola De Cao and
James Thorne and
Yacine Jernite and
Vassilis Plachouras and
Tim Rockt\"aschel and
Sebastian Riedel},
title = {{KILT:} a {B}enchmark for {K}nowledge {I}ntensive {L}anguage {T}asks},
journal = {CoRR},
archivePrefix = {arXiv},
year = {2020}, | null | 31 | 2,907 | ---
annotations_creators:
- crowdsourced
- found
- machine-generated
language_creators:
- crowdsourced
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
source_datasets:
- extended|natural_questions
- extended|other-aidayago
- extended|other-fever
- extended|other-hotpotqa
- extended|other-trex
- extended|other-triviaqa
- extended|other-wizardsofwikipedia
- extended|other-wned-cweb
- extended|other-wned-wiki
- extended|other-zero-shot-re
- original
task_categories:
- fill-mask
- question-answering
- text-classification
- text-generation
- text-retrieval
- text2text-generation
task_ids:
- abstractive-qa
- dialogue-modeling
- document-retrieval
- entity-linking-retrieval
- extractive-qa
- fact-checking
- fact-checking-retrieval
- open-domain-abstractive-qa
- open-domain-qa
- slot-filling
paperswithcode_id: kilt
pretty_name: KILT
dataset_info:
- config_name: triviaqa_support_only
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 72024147
num_examples: 61844
- name: validation
num_bytes: 6824774
num_examples: 5359
- name: test
num_bytes: 341964
num_examples: 6586
download_size: 111546348
dataset_size: 79190885
- config_name: fever
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 23941622
num_examples: 104966
- name: validation
num_bytes: 3168503
num_examples: 10444
- name: test
num_bytes: 1042660
num_examples: 10100
download_size: 45954548
dataset_size: 28152785
- config_name: aidayago2
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 68944642
num_examples: 18395
- name: validation
num_bytes: 20743548
num_examples: 4784
- name: test
num_bytes: 14211859
num_examples: 4463
download_size: 105637528
dataset_size: 103900049
- config_name: wned
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: validation
num_bytes: 12659894
num_examples: 3396
- name: test
num_bytes: 13082096
num_examples: 3376
download_size: 26163472
dataset_size: 25741990
- config_name: cweb
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: validation
num_bytes: 89819628
num_examples: 5599
- name: test
num_bytes: 99209665
num_examples: 5543
download_size: 190444736
dataset_size: 189029293
- config_name: trex
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 1190269126
num_examples: 2284168
- name: validation
num_bytes: 2573820
num_examples: 5000
- name: test
num_bytes: 758742
num_examples: 5000
download_size: 1757029516
dataset_size: 1193601688
- config_name: structured_zeroshot
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 47171201
num_examples: 147909
- name: validation
num_bytes: 1612499
num_examples: 3724
- name: test
num_bytes: 1141537
num_examples: 4966
download_size: 74927220
dataset_size: 49925237
- config_name: nq
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 30388752
num_examples: 87372
- name: validation
num_bytes: 6190493
num_examples: 2837
- name: test
num_bytes: 334178
num_examples: 1444
download_size: 60166499
dataset_size: 36913423
- config_name: hotpotqa
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 33598679
num_examples: 88869
- name: validation
num_bytes: 2371638
num_examples: 5600
- name: test
num_bytes: 888476
num_examples: 5569
download_size: 57516638
dataset_size: 36858793
- config_name: eli5
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 525586490
num_examples: 272634
- name: validation
num_bytes: 13860153
num_examples: 1507
- name: test
num_bytes: 108108
num_examples: 600
download_size: 562498660
dataset_size: 539554751
- config_name: wow
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 41873570
num_examples: 63734
- name: validation
num_bytes: 2022128
num_examples: 3054
- name: test
num_bytes: 1340818
num_examples: 2944
download_size: 52647339
dataset_size: 45236516
config_names:
- aidayago2
- cweb
- eli5
- fever
- hotpotqa
- nq
- structured_zeroshot
- trex
- triviaqa_support_only
- wned
- wow
---
# Dataset Card for KILT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai.facebook.com/tools/kilt/
- **Repository:** https://github.com/facebookresearch/KILT
- **Paper:** https://arxiv.org/abs/2009.02252
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/689/leaderboard/
- **Point of Contact:** [Needs More Information]
### Dataset Summary
KILT has been built from 11 datasets representing 5 types of tasks:
- Fact-checking
- Entity linking
- Slot filling
- Open domain QA
- Dialog generation
All these datasets have been grounded in a single pre-processed Wikipedia dump, allowing for fairer and more consistent evaluation as well as enabling new task setups such as multitask and transfer learning with minimal effort. KILT also provides tools to analyze and understand the predictions made by models, as well as the evidence they provide for their predictions.
#### Loading the KILT knowledge source and task data
The original KILT [release](https://github.com/facebookresearch/KILT) only provides question IDs for the TriviaQA task. Using the full dataset requires mapping those back to the TriviaQA questions, which can be done as follows:
```python
from datasets import load_dataset
# Get the pre-processed Wikipedia knowledge source for KILT
kilt_wiki = load_dataset("kilt_wikipedia")
# Get the KILT task datasets
kilt_triviaqa = load_dataset("kilt_tasks", name="triviaqa_support_only")
# Most tasks in KILT already have all required data, but KILT-TriviaQA
# only provides the question IDs, not the questions themselves.
# Thankfully, we can get the original TriviaQA data with:
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# The KILT IDs can then be mapped to the TriviaQA questions with:
triviaqa_map = {}
def add_missing_data(x, trivia_qa_subset, triviaqa_map):
i = triviaqa_map[x['id']]
x['input'] = trivia_qa_subset[i]['question']
x['output']['original_answer'] = trivia_qa_subset[i]['answer']['value']
return x
for k in ['train', 'validation', 'test']:
triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(trivia_qa[k]['question_id'])])
kilt_triviaqa[k] = kilt_triviaqa[k].filter(lambda x: x['id'] in triviaqa_map)
kilt_triviaqa[k] = kilt_triviaqa[k].map(add_missing_data, fn_kwargs=dict(trivia_qa_subset=trivia_qa[k], triviaqa_map=triviaqa_map))
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
### Data Instances
An example of open-domain QA from the Natural Questions `nq` configuration looks as follows:
```
{'id': '-5004457603684974952',
'input': 'who is playing the halftime show at super bowl 2016',
'meta': {'left_context': '',
'mention': '',
'obj_surface': [],
'partial_evidence': [],
'right_context': '',
'sub_surface': [],
'subj_aliases': [],
'template_questions': []},
'output': [{'answer': 'Coldplay',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 186,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 178,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Beyoncé',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 224,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 217,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Bruno Mars',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 239,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 229,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Coldplay with special guest performers Beyoncé and Bruno Mars',
'meta': {'score': 0},
'provenance': []},
{'answer': 'British rock group Coldplay with special guest performers Beyoncé and Bruno Mars',
'meta': {'score': 0},
'provenance': []},
{'answer': '',
'meta': {'score': 0},
'provenance': [{'bleu_score': 0.9657992720603943,
'end_character': 341,
'end_paragraph_id': 1,
'meta': {'annotation_id': '2430977867500315580',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': 'NONE'},
'section': 'Section::::Abstract.',
'start_character': 0,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': '',
'meta': {'score': 0},
'provenance': [{'bleu_score': -1.0,
'end_character': -1,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': ['It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars',
'It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who previously had headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively.',
"The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars",
"The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars,"],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': -1,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]}]}
```
### Data Fields
Examples from all configurations have the following features:
- `input`: a `string` feature representing the query.
- `output`: a `list` of features each containing information for an answer, made up of:
- `answer`: a `string` feature representing a possible answer.
- `provenance`: a `list` of features representing Wikipedia passages that support the `answer`, denoted by:
- `title`: a `string` feature, the title of the Wikipedia article the passage was retrieved from.
  - `section`: a `string` feature, the title of the section of the Wikipedia article the passage was retrieved from.
- `wikipedia_id`: a `string` feature, a unique identifier for the Wikipedia article.
  - `start_character`: an `int32` feature.
  - `start_paragraph_id`: an `int32` feature.
  - `end_character`: an `int32` feature.
  - `end_paragraph_id`: an `int32` feature.
### Data Splits
The configurations have the following splits:
| | Train | Validation | Test |
| ----------- | ----------- | ----------- | ----------- |
| triviaqa | 61844 | 5359 | 6586 |
| fever | 104966 | 10444 | 10100 |
| aidayago2 | 18395 | 4784 | 4463 |
| wned | | 3396 | 3376 |
| cweb | | 5599 | 5543 |
| trex | 2284168 | 5000 | 5000 |
| structured_zeroshot | 147909 | 3724 | 4966 |
| nq | 87372 | 2837 | 1444 |
| hotpotqa | 88869 | 5600 | 5569 |
| eli5 | 272634 | 1507 | 600 |
| wow | 63734 | 3054 | 2944 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{kilt_tasks,
author = {Fabio Petroni and
Aleksandra Piktus and
Angela Fan and
Patrick S. H. Lewis and
Majid Yazdani and
Nicola De Cao and
James Thorne and
Yacine Jernite and
Vladimir Karpukhin and
Jean Maillard and
Vassilis Plachouras and
Tim Rockt{\"{a}}schel and
Sebastian Riedel},
editor = {Kristina Toutanova and
Anna Rumshisky and
Luke Zettlemoyer and
Dilek Hakkani{-}T{\"{u}}r and
Iz Beltagy and
Steven Bethard and
Ryan Cotterell and
Tanmoy Chakraborty and
Yichao Zhou},
title = {{KILT:} a Benchmark for Knowledge Intensive Language Tasks},
booktitle = {Proceedings of the 2021 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language Technologies,
{NAACL-HLT} 2021, Online, June 6-11, 2021},
pages = {2523--2544},
publisher = {Association for Computational Linguistics},
year = {2021},
url = {https://www.aclweb.org/anthology/2021.naacl-main.200/}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
story_cloze | 2023-04-05T13:40:54.000Z | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | 'Story Cloze Test' is a commonsense reasoning framework for evaluating story understanding,
story generation, and script learning. This test requires a system to choose the correct ending
to a four-sentence story. | @inproceedings{mostafazadeh2017lsdsem,
title={Lsdsem 2017 shared task: The story cloze test},
author={Mostafazadeh, Nasrin and Roth, Michael and Louis, Annie and Chambers, Nathanael and Allen, James},
booktitle={Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics},
pages={46--51},
year={2017}
} | null | 6 | 2,899 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: null
pretty_name: Story Cloze Test
dataset_info:
- config_name: '2016'
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: validation
num_bytes: 614084
num_examples: 1871
- name: test
num_bytes: 613184
num_examples: 1871
download_size: 0
dataset_size: 1227268
- config_name: '2018'
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: validation
num_bytes: 515439
num_examples: 1571
download_size: 0
dataset_size: 515439
---
# Dataset Card for "story_cloze"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cs.rochester.edu/nlp/rocstories/](https://cs.rochester.edu/nlp/rocstories/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Lsdsem 2017 shared task: The story cloze test](https://aclanthology.org/W17-0906.pdf)
- **Point of Contact:** [Nasrin Mostafazadeh](nasrinm@cs.rochester.edu)
- **Size of downloaded dataset files:** 2.13 MB
- **Size of the generated dataset:** 2.13 MB
- **Total amount of disk used:** 2.15 MB
### Dataset Summary
'Story Cloze Test' is a commonsense reasoning framework for evaluating story understanding,
story generation, and script learning. This test requires a system to choose the correct ending
to a four-sentence story.
### Supported Tasks and Leaderboards
commonsense reasoning
### Languages
English
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2.13 MB
- **Size of the generated dataset:** 2.13 MB
- **Total amount of disk used:** 2.15 MB
An example of 'validation' looks as follows.
```
{'answer_right_ending': 1,
'input_sentence_1': 'Rick grew up in a troubled household.',
'input_sentence_2': 'He never found good support in family, and turned to gangs.',
'input_sentence_3': "It wasn't long before Rick got shot in a robbery.",
'input_sentence_4': 'The incident caused him to turn a new leaf.',
'sentence_quiz1': 'He is happy now.',
'sentence_quiz2': 'He joined a gang.',
'story_id': '138d5bfb-05cc-41e3-bf2c-fa85ebad14e2'}
```
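An instance like the one above maps naturally onto a multiple-choice format. The sketch below shows one possible conversion into a (context, candidate endings, label) triple; the field names follow the schema on this card, and `example` is the hypothetical instance shown above rather than data loaded from the dataset.

```python
def to_multiple_choice(example):
    """Convert a Story Cloze example into (context, endings, 0-indexed label)."""
    context = " ".join(example[f"input_sentence_{i}"] for i in range(1, 5))
    endings = [example["sentence_quiz1"], example["sentence_quiz2"]]
    label = example["answer_right_ending"] - 1  # 1-indexed in the data -> 0-indexed
    return context, endings, label

example = {
    "answer_right_ending": 1,
    "input_sentence_1": "Rick grew up in a troubled household.",
    "input_sentence_2": "He never found good support in family, and turned to gangs.",
    "input_sentence_3": "It wasn't long before Rick got shot in a robbery.",
    "input_sentence_4": "The incident caused him to turn a new leaf.",
    "sentence_quiz1": "He is happy now.",
    "sentence_quiz2": "He joined a gang.",
}

context, endings, label = to_multiple_choice(example)
print(label, endings[label])
# → 0 He is happy now.
```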
### Data Fields
The data fields are the same among all splits.
- `input_sentence_1`: The first statement in the story.
- `input_sentence_2`: The second statement in the story.
- `input_sentence_3`: The third statement in the story.
- `input_sentence_4`: The fourth statement in the story.
- `sentence_quiz1`: The first possible continuation of the story.
- `sentence_quiz2`: The second possible continuation of the story.
- `answer_right_ending`: The correct ending; either 1 (`sentence_quiz1`) or 2 (`sentence_quiz2`).
- `story_id`: The story ID.
### Data Splits
| name |validation |test|
|-------|-----:|---:|
|2016|1871|1871|
|2018|1571|-|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{mostafazadeh2017lsdsem,
title={Lsdsem 2017 shared task: The story cloze test},
author={Mostafazadeh, Nasrin and Roth, Michael and Louis, Annie and Chambers, Nathanael and Allen, James},
booktitle={Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics},
pages={46--51},
year={2017}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai). |
hf-internal-testing/instructpix2pix-10-samples | 2023-06-09T19:57:18.000Z | [
"region:us"
] | hf-internal-testing | null | null | null | 0 | 2,899 | ---
dataset_info:
features:
- name: input_image
dtype: image
- name: edited_image
dtype: image
- name: edit_prompt
dtype: string
splits:
- name: train
num_bytes: 4479546.0
num_examples: 10
download_size: 4481212
dataset_size: 4479546.0
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bentrevett/multi30k | 2023-03-24T14:50:27.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"region:us"
] | bentrevett | null | null | null | 1 | 2,881 | ---
task_categories:
- translation
language:
- en
- de
size_categories:
- 10K<n<100K
---
# Multi30k
This dataset contains the "multi30k" dataset, which is the "task 1" dataset from [here](https://www.statmt.org/wmt16/multimodal-task.html).
Each example consists of an "en" and a "de" feature. "en" is an English sentence, and "de" is the German translation of the English sentence.
### Data Splits
The Multi30k dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 29,000 |
| Validation | 1,014 |
| Test | 1,000 |
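Since each example is a flat pair of `"en"` and `"de"` strings, preparing data for a sequence-to-sequence translation model can be as simple as renaming the two fields. The sketch below is illustrative; the sample sentence pair is made up, not taken from the dataset.

```python
def format_pair(example, src="en", tgt="de"):
    """Map a Multi30k-style example onto generic seq2seq source/target fields."""
    return {"source": example[src], "target": example[tgt]}

# A hypothetical (en, de) pair with the same structure as the dataset's examples
pair = {"en": "A man is riding a bike.", "de": "Ein Mann fährt Fahrrad."}
print(format_pair(pair))
# → {'source': 'A man is riding a bike.', 'target': 'Ein Mann fährt Fahrrad.'}
```

The same function could be passed to `datasets.Dataset.map` to transform the whole split at once.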
### Citation Information
```
@inproceedings{elliott-EtAl:2016:VL16,
    author = {{Elliott}, D. and {Frank}, S. and {Sima'an}, K. and {Specia}, L.},
    title = {Multi30K: Multilingual English-German Image Descriptions},
    booktitle = {Proceedings of the 5th Workshop on Vision and Language},
    pages = {70--74},
    year = {2016}
}
``` |
InstaDeepAI/human_reference_genome | 2023-04-20T13:37:22.000Z | [
"DNA",
"Genomics",
"Nucleotide",
"region:us"
] | InstaDeepAI | Genome Reference Consortium Human Build 38 patch release 14 (GRCh38.p14)
filtered and split into chunks. | @article{o2016reference,
title={Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation},
author={O'Leary, Nuala A and Wright, Mathew W and Brister, J Rodney and Ciufo, Stacy and Haddad, Diana and McVeigh, Rich and Rajput, Bhanu and Robbertse, Barbara and Smith-White, Brian and Ako-Adjei, Danso and others},
journal={Nucleic acids research},
volume={44},
number={D1},
pages={D733--D745},
year={2016},
publisher={Oxford University Press}
} | null | 0 | 2,876 | ---
tags:
- DNA
- Genomics
- Nucleotide
pretty_name: Human Reference Genome
---
# Dataset Card for the human reference genome
## Dataset Description
- **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer)
- **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1)
### Dataset Summary
The Human reference genome dataset was constructed by considering all autosomal and sex chromosomes sequences from reference assembly [GRCh38/hg38](https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.26) and reaches a total of 3.2 billion nucleotides.
### Supported Tasks and Leaderboards
This dataset has been used as a pre-training corpus for the Nucleotide Transformer models. Depending on the configuration used, each sequence is 6,200 or 12,200 base pairs long. If the dataset is iterated without being shuffled, the first 100 nucleotides of a sequence are the same as the last 100 base pairs of the previous sequence, and the last 100 nucleotides are the same as the first 100 base pairs of the next sequence. During training, this allows for randomly selecting a nucleotide among the first 200 nucleotides of the sequence and starting the tokenization from this nucleotide. That way, the whole chromosome is covered and the model sees different tokens for a given sequence at each epoch.
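The sampling trick described above can be sketched in a few lines: because neighbouring chunks overlap by 100 bp on each side, picking a random start offset within the first 200 nucleotides of a chunk still covers the whole chromosome while varying the tokens seen per epoch. The function and lengths below are illustrative, not the actual training code.

```python
import random

def sample_window(chunk, window=1000, max_offset=200, rng=random):
    """Pick a random start among the first `max_offset` positions of `chunk`
    and return a `window`-long slice starting there."""
    start = rng.randrange(max_offset)
    return chunk[start:start + window]

chunk = "ACGT" * 500  # a 2,000 bp toy chunk standing in for a dataset sequence
window = sample_window(chunk, window=1000, max_offset=200, rng=random.Random(0))
print(len(window))
# → 1000
```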
### Languages
DNA
## Dataset Structure
[N/A]
### Data Instances
For each instance, there is a string representing the sequence, a string indicating the chromosome, and two integers representing the index of the first and last nucleotide respectively. An instance is shown below:
```python
{'sequence': 'CATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCACTGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCATTGTTCATCTTCTGGCCCCTGTTGTCTGCATGTAACTTAATACCACAACCAGGCATAGGGGAAAGATTGGAGGAAAGATGAGTGAGAGCATCAACTTCTCTCACAACCTAGGCCAGTAAGTAGTGCTTGTGCTCATCTCCTTGGCTGTGATACGTGGCCGGCCCTCGCTCCAGCAGCTGGACCCCTACCTGCCGTCTGCTGCCATCGGAGCCCAAAGCCGGGCTGTGACTGCTCAGACCAGCCGGCTGGAGGGAGGGGCTCAGCAGGTCTGGCTTTGGCCCTGGGAGAGCAGGTGGAAGATCAGGCAGGCCATCGCTGCCACAGAACCCAGTGGATTGGCCTAGGTGGGATCTCTGAGCTCAACAAGCCCTCTCTGGGTGGTAGGTGCAGAGACGGGAGGGGCAGAGCCGCAGGCACAGCCAAGAGGGCTGAAGAAATGGTAGAACGGAGCAGCTGGTGATGTGTGGGCCCACCGGCCCCAGGCTCCTGTCTCCCCCCAGGTGTGTGGTGATGCCAGGCATGCCCTTCCCCAGCATCAGGTCTCCAGAGCTGCAGAAGACGACGGCCGACTTGGATCACACTCTTGTGAGTGTCCCCAGTGTTGCAGAGGTGAGAGGAGAGTAGACAGTGAGTGGGAGTGGCGTCGCCCCTAGGGCTCTACGGGGCCGGCGTCTCCTGTCTCCTGGAGAGGCTTCGATGCCCCTCCACACCCTCTTGATCTTCCCTGTGATGTCATCTGGAGCCCTGCTGCTTGCGGTGGCCTATAAAGCCTCCTAGTCTGGCTCCAAGGCCTGGCAGAGTCTTTCCCAGGGAAAGCTACAAGCAGCAAACAGTCTGCATGGGTCATCCCCTTCACTCCCAGCTCAGAGCCCAGGCCAGGGGCCCCCAAGAAAGGCTCTGGTGGAGAACCTGTGCATGAAGGCTGTCAACCAGTCCATAGGCAAGCCTGGCTGCCTCCAGCTGGGTCGACAGACAGGGGCTGGAGAAGGGGAGAAGAGGAAAGTGAGGTTGCCTGCCCTGTCTCCTACCTGAGGCTGAGGAAGGAGAAGGGGATGCACTGTTGGGGAGGCAGCTGTAACTCAAAGCCTTAGCCTCTGTTCCCACGAAGGCAGGGCCATCAGGCACCAAAGGGATTCTGCCAGCATAGTGCTCCTGGACCAGTGATACACCCGGCACCCTGTCCTGGACACGCTGTTGGCCTGGATCTGAGCCCTGGTGGAGGTCAAAGCCACCTTTGGTTCTGCCATTGCTGCTGTGTGGAAGTTCACTCCTGCCTTTTCCTTTCCCTAGAGCCTCCACCACCCCGAGATCACATTTCTCACTGCCTTTTGTCTGCCCAGTTTCACCAGAAGTAGGCCTCTTCCTGACAGGCAGCTGCACCACTGCCTGGCGCTGTGCCCTTCCTTTGCTCTGCCCGCTGGAGACGGTGTTTGTCATGGGCCTGGTCTGCAGGGATCCTGCTACAAAGGTGAAACCCAGGAGAGTGTGGAGTCCAGAGTGTTGCCAGGACCCAGGCACAGGCATTAGTGCCCGTTGGAGAAAACAGGGGAATCCCGAAGAAATGGTGGGTCCTGGCCATCCGTGAGATCTTCCCAGGGCAGCTCCCCTCTGTGGAATCCAATCTGTCTTCCATCCTGCGTGGCCGAGGGCCAGGCTTCTCACTGGGCCTCTGCAGGAGGCTGCCATTTGTCCTGCCCACCTTCTTAGAAGCGAGACGGAGCAGACCCATCTGCTACTGCCCTTTCTATAATAACTAAAGTTAGCTGCCCTGGACTATTCACCCCCTAGTCTCAATTTAAGAAGATCCCCATGGCCACAGGGCCCCTGCCTGGGGGCTTGTCACCTCCCCCACCTTCTTCCTGAGTCATTCCTGC
AGCCTTGCTCCCTAACCTGCCCCACAGCCTTGCCTGGATTTCTATCTCCCTGGCTTGGTGCCAGTTCCTCCAAGTCGATGGCACCTCCCTCCCTCTCAACCACTTGAGCAAACTCCAAGACATCTTCTACCCCAACACCAGCAATTGTGCCAAGGGCCATTAGGCTCTCAGCATGACTATTTTTAGAGACCCCGTGTCTGTCACTGAAACCTTTTTTGTGGGAGACTATTCCTCCCATCTGCAACAGCTGCCCCTGCTGACTGCCCTTCTCTCCTCCCTCTCATCCCAGAGAAACAGGTCAGCTGGGAGCTTCTGCCCCCACTGCCTAGGGACCAACAGGGGCAGGAGGCAGTCACTGACCCCGAGACGTTTGCATCCTGCACAGCTAGAGATCCTTTATTAAAAGCACACTGTTGGTTTCTGCTCAGTTCTTTATTGATTGGTGTGCCGTTTTCTCTGGAAGCCTCTTAAGAACACAGTGGCGCAGGCTGGGTGGAGCCGTCCCCCCATGGAGCACAGGCAGACAGAAGTCCCCGCCCCAGCTGTGTGGCCTCAAGCCAGCCTTCCGCTCCTTGAAGCTGGTCTCCACACAGTGCTGGTTCCGTCACCCCCTCCCAAGGAAGTAGGTCTGAGCAGCTTGTCCTGGCTGTGTCCATGTCAGAGCAACGGCCCAAGTCTGGGTCTGGGGGGGAAGGTGTCATGGAGCCCCCTACGATTCCCAGTCGTCCTCGTCCTCCTCTGCCTGTGGCTGCTGCGGTGGCGGCAGAGGAGGGATGGAGTCTGACACGCGGGCAAAGGCTCCTCCGGGCCCCTCACCAGCCCCAGGTCCTTTCCCAGAGATGCCTGGAGGGAAAAGGCTGAGTGAGGGTGGTTGGTGGGAAACCCTGGTTCCCCCAGCCCCCGGAGACTTAAATACAGGAAGAAAAAGGCAGGACAGAATTACAAGGTGCTGGCCCAGGGCGGGCAGCGGCCCTGCCTCCTACCCTTGCGCCTCATGACCAGCTTGTTGAAGAGATCCGACATCAAGTGCCCACCTTGGCTCGTGGCTCTCACTGCAACGGGAAAGCCACAGACTGGGGTGAAGAGTTCAGTCACATGCGACCGGTGACTCCCTGTCCCCACCCCCATGACACTCCCCAGCCCTCCAAGGCCACTGTGTTTCCCAGTTAGCTCAGAGCCTCAGTCGATCCCTGACCCAGCACCGGGCACTGATGAGACAGCGGCTGTTTGAGGAGCCACCTCCCAGCCACCTCGGGGCCAGGGCCAGGGTGTGCAGCACCACTGTACAATGGGGAAACTGGCCCAGAGAGGTGAGGCAGCTTGCCTGGGGTCACAGAGCAAGGCAAAAGCAGCGCTGGGTACAAGCTCAAAACCATAGTGCCCAGGGCACTGCCGCTGCAGGCGCAGGCATCGCATCACACCAGTGTCTGCGTTCACAGCAGGCATCATCAGTAGCCTCCAGAGGCCTCAGGTCCAGTCTCTAAAAATATCTCAGGAGGCTGCAGTGGCTGACCATTGCCTTGGACCGCTCTTGGCAGTCGAAGAAGATTCTCCTGTCAGTTTGAGCTGGGTGAGCTTAGAGAGGAAAGCTCCACTATGGCTCCCAAACCAGGAAGGAGCCATAGCCCAGGCAGGAGGGCTGAGGACCTCTGGTGGCGGCCCAGGGCTTCCAGCATGTGCCCTAGGGGAAGCAGGGGCCAGCTGGCAAGAGCAGGGGGTGGGCAGAAAGCACCCGGTGGACTCAGGGCTGGAGGGGAGGAGGCGATCTTGCCCAAGGCCCTCCGACTGCAAGCTCCAGGGCCCGCTCACCTTGCTCCTGCTCCTTCTGCTGCTGCTTCTCCAGCTTTCGCTCCTTCATGCTGCGCAGCTTGGCCTTGCCGATGCCCCCAGCTTGGCGGATGGACTCTAGCAGAGTGGCCAGCCACCGGAGGGGTCAACCACTTCCCTGGGAGCTCCCTGGACTGGAGCCGGGAGGTGGGGAACAGGGCAAGGAGGAAAGG
CTGCTCAGGCAGGGCTGGGGAAGCTTACTGTGTCCAAGAGCCTGCTGGGAGGGAAGTCACCTCCCCTCAAACGAGGAGCCCTGCGCTGGGGAGGCCGGACCTTTGGAGACTGTGTGTGGGGGCCTGGGCACTGACTTCTGCAACCACCTGAGCGCGGGCATCCTGTGTGCAGATACTCCCTGCTTCCTCTCTAGCCCCCACCCTGCAGAGCTGGACCCCTGAGCTAGCCATGCTCTGACAGTCTCAGTTGCACACACGAGCCAGCAGAGGGGTTTTGTGCCACTTCTGGATGCTAGGGTTACACTGGGAGACACAGCAGTGAAGCTGAAATGAAAAATGTGTTGCTGTAGTTTGTTATTAGACCCCTTCTTTCCATTGGTTTAATTAGGAATGGGGAACCCAGAGCCTCACTTGTTCAGGCTCCCTCTGCCCTAGAAGTGAGAAGTCCAGAGCTCTACAGTTTGAAAACCACTATTTTATGAACCAAGTAGAACAAGATATTTGAAATGGAAACTATTCAAAAAATTGAGAATTTCTGACCACTTAACAAACCCACAGAAAATCCACCCGAGTGCACTGAGCACGCCAGAAATCAGGTGGCCTCAAAGAGCTGCTCCCACCTGAAGGAGACGCGCTGCTGCTGCTGTCGTCCTGCCTGGCGCCTTGGCCTACAGGGGCCGCGGTTGAGGGTGGGAGTGGGGGTGCACTGGCCAGCACCTCAGGAGCTGGGGGTGGTGGTGGGGGCGGTGGGGGTGGTGTTAGTACCCCATCTTGTAGGTCTGAAACACAAAGTGTGGGGTGTCTAGGGAAGAAGGTGTGTGACCAGGGAGGTCCCCGGCCCAGCTCCCATCCCAGAACCCAGCTCACCTACCTTGAGAGGCTCGGCTACCTCAGTGTGGAAGGTGGGCAGTTCTGGAATGGTGCCAGGGGCAGAGGGGGCAATGCCGGGGCCCAGGTCGGCAATGTACATGAGGTCGTTGGCAATGCCGGGCAGGTCAGGCAGGTAGGATGGAACATCAATCTCAGGCACCTGGCCCAGGTCTGGCACATAGAAGTAGTTCTCTGGGACCTGCAAGATTAGGCAGGGACATGTGAGAGGTGACAGGGACCTGCAGGGGCAGCCAACAAGACCTTGTGTGCACCTCCCATGGGTGGAATAAGGGGCCCAACAGCCTTGACTGGAGAGGAGCTCTGGCAAGGCCCTGGGCCACTGCACCTGTCTCCACCTCTGTCCCACCCCTCCCACCTGCTGTTCCAGCTGCTCTCTCTTGCTGATGGACAAGGGGGCATCAAACAGCTTCTCCTCTGTCTCTGCCCCCAGCATCACATGGGTCTTTGTTACAGCACCAGCCAGGGGGTCCAGGAAGACATACTTCTTCTACCTACAGAGGCGACATGGGGGTCAGGCAAGCTGACACCCGCTGTCCTGAGCCCATGTTCCTCTCCCACATCATCAGGGGCACAGCGTGCACTGTGGGGTCCCAGGCCTCCCGAGCCGAGCCACCCGTCACCCCCTGGCTCCTGGCCTATGTGCTGTACCTGTGTCTGATGCCCTGGGTCCCCACTAAGCCAGGCCGGGCCTCCCGCCCACACCCCTCGGCCCTGCCCTCTGGCCATACAGGTTCTCGGTGGTGTTGAAGAGCAGCAAGGAGCTGACAGAGCTGATGTTGCTGGGAAGACCCCCAAGTCCCTCTTCTGCATCGTCCTCGGGCTCCGGCTTGGTGCTCACGCACACAGGAAAGTCCTTCAGCTTCTCCTGAGAGGGCCAGGATGGCCAAGGGATGGTGAATATTTGGTGCTGGGCCTAATCAGCTGCCATCCCATCCCAGTCAGCCTCCTCTGGGGGACAGAACCCTATGGTGGCCCCGGCTCCTCCCCAGTATCCAGTCCTCCTGGTGTGTGACAGGCTATATGCGCGGCCAGCAGACCTGCAGGGCCCGCTCGTCCAGGGGGCGGTGCTTGCTCTGGATCCTGTGGCGGGGGCGTCTCTGCAGGCCAGG
GTCCTGGGCGCCCGTGAAGATGGAGCCATATTCCTGCAGGCGCCCTGGAGCAGGGTACTTGGCACTGGAGAACACCTGTGGACACAGGGACAAGTCTGAGGGGGCCCCAAGAGGCTCAGAGGGCTAGGATTGCTTGGCAGGAGAGGGTGGAGTTGGAAGCCTGGGCGAGAAGAAAGCTCAAGGTACAGGTGGGCAGCAGGGCAGAGACTGGGCA',
'chromosome': '1',
'start_pos': 12000,
'end_pos': 18200}
```
### Data Fields
- `sequence`: a string containing a DNA sequence from the human reference genome
- `chromosome`: a string indicating the chromosome (1,2,...,22,X,Y)
- `start_pos`: an integer indicating the index of the sequence's first nucleotide
- `end_pos`: an integer indicating the index of the sequence's last nucleotide
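The fields above can be consumed directly as plain Python values. As a minimal sketch (the record below is a hypothetical, truncated example mirroring the dataset's schema, not a real instance), one can sanity-check the coordinate span and compute a simple per-window statistic such as GC content:

```python
# Hypothetical record mirroring the dataset's fields (sequence truncated for brevity).
record = {
    "sequence": "CATCTGCAGGTGTCTGACTTCCAG",
    "chromosome": "1",
    "start_pos": 12000,
    "end_pos": 12024,
}

def gc_content(seq: str) -> float:
    """Fraction of G/C nucleotides in a sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

# The coordinate span should match the sequence length.
span = record["end_pos"] - record["start_pos"]
assert span == len(record["sequence"])

print(f"chr{record['chromosome']}: GC = {gc_content(record['sequence']):.2%}")
```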
### Data Splits
The Human reference genome dataset has 3 splits: train, validation, and test. Below are the statistics for the dataset.
```
| Dataset Split | Number of Instances in Split (6kb) | Number of Instances in Split (12kb) |
| ------------- | ---------------------------------- | ----------------------------------- |
| Train         | 498,444                            | 249,222                             |
| Validation    | 7,784                              | 3,892                               |
| Test          | 8,469                              | 4,234                               |
```
## Dataset Creation
[N/A]
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
The data consists of sequences cut from the chromosomes found in the [GRCh38/hg38](https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.26) human reference genome.
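Cutting chromosome sequences into fixed-length windows can be sketched as below. This is an illustrative reconstruction, not the curators' actual pipeline: the window stride, coordinate origin, and handling of trailing bases are assumptions.

```python
def cut_windows(sequence: str, chromosome: str, window: int = 6000):
    """Cut a chromosome sequence into non-overlapping fixed-length windows,
    recording coordinates like the dataset's start_pos/end_pos fields.
    Trailing bases shorter than `window` are dropped (an assumption)."""
    records = []
    for start in range(0, len(sequence) - window + 1, window):
        records.append({
            "sequence": sequence[start:start + window],
            "chromosome": chromosome,
            "start_pos": start,
            "end_pos": start + window,
        })
    return records

# Toy example: a 25-base "chromosome" cut into 10-base windows yields 2 records.
toy = cut_windows("ACGT" * 6 + "A", chromosome="1", window=10)
print(len(toy), toy[0]["start_pos"], toy[0]["end_pos"])  # 2 0 10
```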
#### Who are the source language producers?
[N/A]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
[N/A]
### Citation Information
```bibtex
@article{dalla2023nucleotide,
title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics},
author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others},
journal={bioRxiv},
pages={2023--01},
year={2023},
publisher={Cold Spring Harbor Laboratory}
}
``` |
FanFan/sentiment-amazon-clean | 2022-03-09T17:12:19.000Z | [
"region:us"
] | FanFan | null | null | null | 0 | 2,829 | Entry not found |
jfleg | 2022-11-18T20:15:50.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:other-language-learner",
"size_categories:1K<n<10K",
"source_datasets:extended|other-GUG-grammaticality-judgements",
"language:en",
"license:cc-by-nc-sa-4.0",
"grammatical-error-correction",
"region:us"
] | null | JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus.
It is a gold standard benchmark for developing and evaluating GEC systems with respect to
fluency (extent to which a text is native-sounding) as well as grammaticality.
For each source document, there are four human-written corrections (ref0 to ref3). | @InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,
author = {Napoles, Courtney
and Sakaguchi, Keisuke
and Tetreault, Joel},
title = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the
Association for Computational Linguistics: Volume 2, Short Papers},
month = {April},
year = {2017},
address = {Valencia, Spain},
publisher = {Association for Computational Linguistics},
pages = {229--234},
url = {http://www.aclweb.org/anthology/E17-2037}
}
@InProceedings{heilman-EtAl:2014:P14-2,
author = {Heilman, Michael
and Cahill, Aoife
and Madnani, Nitin
and Lopez, Melissa
and Mulholland, Matthew
and Tetreault, Joel},
title = {Predicting Grammaticality on an Ordinal Scale},
booktitle = {Proceedings of the 52nd Annual Meeting of the
Association for Computational Linguistics (Volume 2: Short Papers)},
month = {June},
year = {2014},
address = {Baltimore, Maryland},
publisher = {Association for Computational Linguistics},
pages = {174--180},
url = {http://www.aclweb.org/anthology/P14-2029}
} | null | 35 | 2,809 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
- other-language-learner
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-GUG-grammaticality-judgements
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: jfleg
pretty_name: JHU FLuency-Extended GUG corpus
tags:
- grammatical-error-correction
dataset_info:
features:
- name: sentence
dtype: string
- name: corrections
sequence: string
splits:
- name: validation
num_bytes: 379991
num_examples: 755
- name: test
num_bytes: 379711
num_examples: 748
download_size: 731111
dataset_size: 759702
---
# Dataset Card for JFLEG
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/keisks/jfleg)
- **Repository:** [Github](https://github.com/keisks/jfleg)
- **Paper:** [Napoles et al., 2017](https://www.aclweb.org/anthology/E17-2037/)
- **Leaderboard:** [Leaderboard](https://github.com/keisks/jfleg#leader-board-published-results)
- **Point of Contact:** Courtney Napoles, Keisuke Sakaguchi
### Dataset Summary
JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus. It is a gold standard benchmark for developing and evaluating GEC systems with respect to fluency (extent to which a text is native-sounding) as well as grammaticality. For each source document, there are four human-written corrections.
### Supported Tasks and Leaderboards
Grammatical error correction.
### Languages
English (native as well as L2 writers)
## Dataset Structure
### Data Instances
Each instance contains a source sentence and four corrections. For example:
```python
{
'sentence': "They are moved by solar energy .",
'corrections': [
"They are moving by solar energy .",
"They are moved by solar energy .",
"They are moved by solar energy .",
"They are propelled by solar energy ."
]
}
```
### Data Fields
- sentence: original sentence written by an English learner
- corrections: corrected versions by human annotators. The order of the annotations is consistent (e.g., the first correction in every instance is always written by annotator "ref0").
### Data Splits
- This dataset contains 1511 examples in total, divided into dev and test splits.
- There are 754 and 747 source sentences in dev and test, respectively.
- Each sentence has 4 corresponding corrected versions.
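Having four references per source sentence means a system hypothesis is usually scored against all of them and the best match is kept. A minimal sketch using a toy token-overlap (Jaccard) score — the JFLEG leaderboard actually uses GLEU, not this metric:

```python
def best_reference_overlap(hypothesis: str, references: list[str]) -> float:
    """Return the highest token-overlap (Jaccard) between a hypothesis and
    any of the human references. Toy stand-in for the GLEU metric that the
    JFLEG leaderboard actually uses."""
    hyp = set(hypothesis.split())
    scores = []
    for ref in references:
        r = set(ref.split())
        scores.append(len(hyp & r) / len(hyp | r) if hyp | r else 1.0)
    return max(scores)

example = {
    "sentence": "They are moved by solar energy .",
    "corrections": [
        "They are moving by solar energy .",
        "They are moved by solar energy .",
        "They are moved by solar energy .",
        "They are propelled by solar energy .",
    ],
}
score = best_reference_overlap(example["sentence"], example["corrections"])
print(score)  # 1.0 — the unchanged sentence matches one reference exactly
```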
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
This benchmark was proposed by [Napoles et al., 2017](https://www.aclweb.org/anthology/E17-2037/).
```
@InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,
author = {Napoles, Courtney and Sakaguchi, Keisuke and Tetreault, Joel},
title = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
month = {April},
year = {2017},
address = {Valencia, Spain},
publisher = {Association for Computational Linguistics},
pages = {229--234},
url = {http://www.aclweb.org/anthology/E17-2037}
}
@InProceedings{heilman-EtAl:2014:P14-2,
author = {Heilman, Michael and Cahill, Aoife and Madnani, Nitin and Lopez, Melissa and Mulholland, Matthew and Tetreault, Joel},
title = {Predicting Grammaticality on an Ordinal Scale},
booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
month = {June},
year = {2014},
address = {Baltimore, Maryland},
publisher = {Association for Computational Linguistics},
pages = {174--180},
url = {http://www.aclweb.org/anthology/P14-2029}
}
```
### Contributions
Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset. |
superb | 2023-01-25T14:45:01.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"task_ids:speaker-identification",
"task_ids:audio-intent-classification",
"task_ids:audio-emotion-recognition",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|other-librimix",
"source_datasets:extended|other-speech_commands",
"language:en",
"license:unknown",
"query-by-example-spoken-term-detection",
"audio-slot-filling",
"speaker-diarization",
"automatic-speaker-verification",
"arxiv:2105.01051",
"region:us"
] | null | Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.
Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .wav format and is not converted to a float32 array. To
convert the audio file to a float32 array, please make use of the `.map()`
function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 19 | 2,784 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
- extended|librispeech_asr
- extended|other-librimix
- extended|other-speech_commands
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- keyword-spotting
- speaker-identification
- audio-intent-classification
- audio-emotion-recognition
pretty_name: SUPERB
tags:
- query-by-example-spoken-term-detection
- audio-slot-filling
- speaker-diarization
- automatic-speaker-verification
dataset_info:
- config_name: asr
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 11852430
num_examples: 28539
- name: validation
num_bytes: 897213
num_examples: 2703
- name: test
num_bytes: 871234
num_examples: 2620
download_size: 7071899769
dataset_size: 13620877
- config_name: sd
features:
- name: record_id
dtype: string
- name: file
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: speakers
list:
- name: speaker_id
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
splits:
- name: train
num_bytes: 4622013
num_examples: 13901
- name: dev
num_bytes: 860472
num_examples: 3014
- name: test
num_bytes: 847803
num_examples: 3002
download_size: 7190370211
dataset_size: 6330288
- config_name: ks
features:
- name: file
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': _silence_
'11': _unknown_
splits:
- name: train
num_bytes: 8467781
num_examples: 51094
- name: validation
num_bytes: 1126476
num_examples: 6798
- name: test
num_bytes: 510619
num_examples: 3081
download_size: 1560367713
dataset_size: 10104876
- config_name: ic
features:
- name: file
dtype: string
- name: speaker_id
dtype: string
- name: text
dtype: string
- name: action
dtype:
class_label:
names:
'0': activate
'1': bring
'2': change language
'3': deactivate
'4': decrease
'5': increase
- name: object
dtype:
class_label:
names:
'0': Chinese
'1': English
'2': German
'3': Korean
'4': heat
'5': juice
'6': lamp
'7': lights
'8': music
'9': newspaper
'10': none
'11': shoes
'12': socks
'13': volume
- name: location
dtype:
class_label:
names:
'0': bedroom
'1': kitchen
'2': none
'3': washroom
splits:
- name: train
num_bytes: 7071466
num_examples: 23132
- name: validation
num_bytes: 953622
num_examples: 3118
- name: test
num_bytes: 1158347
num_examples: 3793
download_size: 1544093324
dataset_size: 9183435
- config_name: si
features:
- name: file
dtype: string
- name: label
dtype:
class_label:
names:
'0': id10001
'1': id10002
'2': id10003
'3': id10004
'4': id10005
'5': id10006
'6': id10007
'7': id10008
'8': id10009
'9': id10010
'10': id10011
'11': id10012
'12': id10013
'13': id10014
'14': id10015
'15': id10016
'16': id10017
'17': id10018
'18': id10019
'19': id10020
'20': id10021
'21': id10022
'22': id10023
'23': id10024
'24': id10025
'25': id10026
'26': id10027
'27': id10028
'28': id10029
'29': id10030
'30': id10031
'31': id10032
'32': id10033
'33': id10034
'34': id10035
'35': id10036
'36': id10037
'37': id10038
'38': id10039
'39': id10040
'40': id10041
'41': id10042
'42': id10043
'43': id10044
'44': id10045
'45': id10046
'46': id10047
'47': id10048
'48': id10049
'49': id10050
'50': id10051
'51': id10052
'52': id10053
'53': id10054
'54': id10055
'55': id10056
'56': id10057
'57': id10058
'58': id10059
'59': id10060
'60': id10061
'61': id10062
'62': id10063
'63': id10064
'64': id10065
'65': id10066
'66': id10067
'67': id10068
'68': id10069
'69': id10070
'70': id10071
'71': id10072
'72': id10073
'73': id10074
'74': id10075
'75': id10076
'76': id10077
'77': id10078
'78': id10079
'79': id10080
'80': id10081
'81': id10082
'82': id10083
'83': id10084
'84': id10085
'85': id10086
'86': id10087
'87': id10088
'88': id10089
'89': id10090
'90': id10091
'91': id10092
'92': id10093
'93': id10094
'94': id10095
'95': id10096
'96': id10097
'97': id10098
'98': id10099
'99': id10100
'100': id10101
'101': id10102
'102': id10103
'103': id10104
'104': id10105
'105': id10106
'106': id10107
'107': id10108
'108': id10109
'109': id10110
'110': id10111
'111': id10112
'112': id10113
'113': id10114
'114': id10115
'115': id10116
'116': id10117
'117': id10118
'118': id10119
'119': id10120
'120': id10121
'121': id10122
'122': id10123
'123': id10124
'124': id10125
'125': id10126
'126': id10127
'127': id10128
'128': id10129
'129': id10130
'130': id10131
'131': id10132
'132': id10133
'133': id10134
'134': id10135
'135': id10136
'136': id10137
'137': id10138
'138': id10139
'139': id10140
'140': id10141
'141': id10142
'142': id10143
'143': id10144
'144': id10145
'145': id10146
'146': id10147
'147': id10148
'148': id10149
'149': id10150
'150': id10151
'151': id10152
'152': id10153
'153': id10154
'154': id10155
'155': id10156
'156': id10157
'157': id10158
'158': id10159
'159': id10160
'160': id10161
'161': id10162
'162': id10163
'163': id10164
'164': id10165
'165': id10166
'166': id10167
'167': id10168
'168': id10169
'169': id10170
'170': id10171
'171': id10172
'172': id10173
'173': id10174
'174': id10175
'175': id10176
'176': id10177
'177': id10178
'178': id10179
'179': id10180
'180': id10181
'181': id10182
'182': id10183
'183': id10184
'184': id10185
'185': id10186
'186': id10187
'187': id10188
'188': id10189
'189': id10190
'190': id10191
'191': id10192
'192': id10193
'193': id10194
'194': id10195
'195': id10196
'196': id10197
'197': id10198
'198': id10199
'199': id10200
'200': id10201
'201': id10202
'202': id10203
'203': id10204
'204': id10205
'205': id10206
'206': id10207
'207': id10208
'208': id10209
'209': id10210
'210': id10211
'211': id10212
'212': id10213
'213': id10214
'214': id10215
'215': id10216
'216': id10217
'217': id10218
'218': id10219
'219': id10220
'220': id10221
'221': id10222
'222': id10223
'223': id10224
'224': id10225
'225': id10226
'226': id10227
'227': id10228
'228': id10229
'229': id10230
'230': id10231
'231': id10232
'232': id10233
'233': id10234
'234': id10235
'235': id10236
'236': id10237
'237': id10238
'238': id10239
'239': id10240
'240': id10241
'241': id10242
'242': id10243
'243': id10244
'244': id10245
'245': id10246
'246': id10247
'247': id10248
'248': id10249
'249': id10250
'250': id10251
'251': id10252
'252': id10253
'253': id10254
'254': id10255
'255': id10256
'256': id10257
'257': id10258
'258': id10259
'259': id10260
'260': id10261
'261': id10262
'262': id10263
'263': id10264
'264': id10265
'265': id10266
'266': id10267
'267': id10268
'268': id10269
'269': id10270
'270': id10271
'271': id10272
'272': id10273
'273': id10274
'274': id10275
'275': id10276
'276': id10277
'277': id10278
'278': id10279
'279': id10280
'280': id10281
'281': id10282
'282': id10283
'283': id10284
'284': id10285
'285': id10286
'286': id10287
'287': id10288
'288': id10289
'289': id10290
'290': id10291
'291': id10292
'292': id10293
'293': id10294
'294': id10295
'295': id10296
'296': id10297
'297': id10298
'298': id10299
'299': id10300
'300': id10301
'301': id10302
'302': id10303
'303': id10304
'304': id10305
'305': id10306
'306': id10307
'307': id10308
'308': id10309
'309': id10310
'310': id10311
'311': id10312
'312': id10313
'313': id10314
'314': id10315
'315': id10316
'316': id10317
'317': id10318
'318': id10319
'319': id10320
'320': id10321
'321': id10322
'322': id10323
'323': id10324
'324': id10325
'325': id10326
'326': id10327
'327': id10328
'328': id10329
'329': id10330
'330': id10331
'331': id10332
'332': id10333
'333': id10334
'334': id10335
'335': id10336
'336': id10337
'337': id10338
'338': id10339
'339': id10340
'340': id10341
'341': id10342
'342': id10343
'343': id10344
'344': id10345
'345': id10346
'346': id10347
'347': id10348
'348': id10349
'349': id10350
'350': id10351
'351': id10352
'352': id10353
'353': id10354
'354': id10355
'355': id10356
'356': id10357
'357': id10358
'358': id10359
'359': id10360
'360': id10361
'361': id10362
'362': id10363
'363': id10364
'364': id10365
'365': id10366
'366': id10367
'367': id10368
'368': id10369
'369': id10370
'370': id10371
'371': id10372
'372': id10373
'373': id10374
'374': id10375
'375': id10376
'376': id10377
'377': id10378
'378': id10379
'379': id10380
'380': id10381
'381': id10382
'382': id10383
'383': id10384
'384': id10385
'385': id10386
'386': id10387
'387': id10388
'388': id10389
'389': id10390
'390': id10391
'391': id10392
'392': id10393
'393': id10394
'394': id10395
'395': id10396
'396': id10397
'397': id10398
'398': id10399
'399': id10400
'400': id10401
'401': id10402
'402': id10403
'403': id10404
'404': id10405
'405': id10406
'406': id10407
'407': id10408
'408': id10409
'409': id10410
'410': id10411
'411': id10412
'412': id10413
'413': id10414
'414': id10415
'415': id10416
'416': id10417
'417': id10418
'418': id10419
'419': id10420
'420': id10421
'421': id10422
'422': id10423
'423': id10424
'424': id10425
'425': id10426
'426': id10427
'427': id10428
'428': id10429
'429': id10430
'430': id10431
'431': id10432
'432': id10433
'433': id10434
'434': id10435
'435': id10436
'436': id10437
'437': id10438
'438': id10439
'439': id10440
'440': id10441
'441': id10442
'442': id10443
'443': id10444
'444': id10445
'445': id10446
'446': id10447
'447': id10448
'448': id10449
'449': id10450
'450': id10451
'451': id10452
'452': id10453
'453': id10454
'454': id10455
'455': id10456
'456': id10457
'457': id10458
'458': id10459
'459': id10460
'460': id10461
'461': id10462
'462': id10463
'463': id10464
'464': id10465
'465': id10466
'466': id10467
'467': id10468
'468': id10469
'469': id10470
'470': id10471
'471': id10472
'472': id10473
'473': id10474
'474': id10475
'475': id10476
'476': id10477
'477': id10478
'478': id10479
'479': id10480
'480': id10481
'481': id10482
'482': id10483
'483': id10484
'484': id10485
'485': id10486
'486': id10487
'487': id10488
'488': id10489
'489': id10490
'490': id10491
'491': id10492
'492': id10493
'493': id10494
'494': id10495
'495': id10496
'496': id10497
'497': id10498
'498': id10499
'499': id10500
'500': id10501
'501': id10502
'502': id10503
'503': id10504
'504': id10505
'505': id10506
'506': id10507
'507': id10508
'508': id10509
'509': id10510
'510': id10511
'511': id10512
'512': id10513
'513': id10514
'514': id10515
'515': id10516
'516': id10517
'517': id10518
'518': id10519
'519': id10520
'520': id10521
'521': id10522
'522': id10523
'523': id10524
'524': id10525
'525': id10526
'526': id10527
'527': id10528
'528': id10529
'529': id10530
'530': id10531
'531': id10532
'532': id10533
'533': id10534
'534': id10535
'535': id10536
'536': id10537
'537': id10538
'538': id10539
'539': id10540
'540': id10541
'541': id10542
'542': id10543
'543': id10544
'544': id10545
'545': id10546
'546': id10547
'547': id10548
'548': id10549
'549': id10550
'550': id10551
'551': id10552
'552': id10553
'553': id10554
'554': id10555
'555': id10556
'556': id10557
'557': id10558
'558': id10559
'559': id10560
'560': id10561
'561': id10562
'562': id10563
'563': id10564
'564': id10565
'565': id10566
'566': id10567
'567': id10568
'568': id10569
'569': id10570
'570': id10571
'571': id10572
'572': id10573
'573': id10574
'574': id10575
'575': id10576
'576': id10577
'577': id10578
'578': id10579
'579': id10580
'580': id10581
'581': id10582
'582': id10583
'583': id10584
'584': id10585
'585': id10586
'586': id10587
'587': id10588
'588': id10589
'589': id10590
'590': id10591
'591': id10592
'592': id10593
'593': id10594
'594': id10595
'595': id10596
'596': id10597
'597': id10598
'598': id10599
'599': id10600
'600': id10601
'601': id10602
'602': id10603
'603': id10604
'604': id10605
'605': id10606
'606': id10607
'607': id10608
'608': id10609
'609': id10610
'610': id10611
'611': id10612
'612': id10613
'613': id10614
'614': id10615
'615': id10616
'616': id10617
'617': id10618
'618': id10619
'619': id10620
'620': id10621
'621': id10622
'622': id10623
'623': id10624
'624': id10625
'625': id10626
'626': id10627
'627': id10628
'628': id10629
'629': id10630
'630': id10631
'631': id10632
'632': id10633
'633': id10634
'634': id10635
'635': id10636
'636': id10637
'637': id10638
'638': id10639
'639': id10640
'640': id10641
'641': id10642
'642': id10643
'643': id10644
'644': id10645
'645': id10646
'646': id10647
'647': id10648
'648': id10649
'649': id10650
'650': id10651
'651': id10652
'652': id10653
'653': id10654
'654': id10655
'655': id10656
'656': id10657
'657': id10658
'658': id10659
'659': id10660
'660': id10661
'661': id10662
'662': id10663
'663': id10664
'664': id10665
'665': id10666
'666': id10667
'667': id10668
'668': id10669
'669': id10670
'670': id10671
'671': id10672
'672': id10673
'673': id10674
'674': id10675
'675': id10676
'676': id10677
'677': id10678
'678': id10679
'679': id10680
'680': id10681
'681': id10682
'682': id10683
'683': id10684
'684': id10685
'685': id10686
'686': id10687
'687': id10688
'688': id10689
'689': id10690
'690': id10691
'691': id10692
'692': id10693
'693': id10694
'694': id10695
'695': id10696
'696': id10697
'697': id10698
'698': id10699
'699': id10700
'700': id10701
'701': id10702
'702': id10703
'703': id10704
'704': id10705
'705': id10706
'706': id10707
'707': id10708
'708': id10709
'709': id10710
'710': id10711
'711': id10712
'712': id10713
'713': id10714
'714': id10715
'715': id10716
'716': id10717
'717': id10718
'718': id10719
'719': id10720
'720': id10721
'721': id10722
'722': id10723
'723': id10724
'724': id10725
'725': id10726
'726': id10727
'727': id10728
'728': id10729
'729': id10730
'730': id10731
'731': id10732
'732': id10733
'733': id10734
'734': id10735
'735': id10736
'736': id10737
'737': id10738
'738': id10739
'739': id10740
'740': id10741
'741': id10742
'742': id10743
'743': id10744
'744': id10745
'745': id10746
'746': id10747
'747': id10748
'748': id10749
'749': id10750
'750': id10751
'751': id10752
'752': id10753
'753': id10754
'754': id10755
'755': id10756
'756': id10757
'757': id10758
'758': id10759
'759': id10760
'760': id10761
'761': id10762
'762': id10763
'763': id10764
'764': id10765
'765': id10766
'766': id10767
'767': id10768
'768': id10769
'769': id10770
'770': id10771
'771': id10772
'772': id10773
'773': id10774
'774': id10775
'775': id10776
'776': id10777
'777': id10778
'778': id10779
'779': id10780
'780': id10781
'781': id10782
'782': id10783
'783': id10784
'784': id10785
'785': id10786
'786': id10787
'787': id10788
'788': id10789
'789': id10790
'790': id10791
'791': id10792
'792': id10793
'793': id10794
'794': id10795
'795': id10796
'796': id10797
'797': id10798
'798': id10799
'799': id10800
'800': id10801
'801': id10802
'802': id10803
'803': id10804
'804': id10805
'805': id10806
'806': id10807
'807': id10808
'808': id10809
'809': id10810
'810': id10811
'811': id10812
'812': id10813
'813': id10814
'814': id10815
'815': id10816
'816': id10817
'817': id10818
'818': id10819
'819': id10820
'820': id10821
'821': id10822
'822': id10823
'823': id10824
'824': id10825
'825': id10826
'826': id10827
'827': id10828
'828': id10829
'829': id10830
'830': id10831
'831': id10832
'832': id10833
'833': id10834
'834': id10835
'835': id10836
'836': id10837
'837': id10838
'838': id10839
'839': id10840
'840': id10841
'841': id10842
'842': id10843
'843': id10844
'844': id10845
'845': id10846
'846': id10847
'847': id10848
'848': id10849
'849': id10850
'850': id10851
'851': id10852
'852': id10853
'853': id10854
'854': id10855
'855': id10856
'856': id10857
'857': id10858
'858': id10859
'859': id10860
'860': id10861
'861': id10862
'862': id10863
'863': id10864
'864': id10865
'865': id10866
'866': id10867
'867': id10868
'868': id10869
'869': id10870
'870': id10871
'871': id10872
'872': id10873
'873': id10874
'874': id10875
'875': id10876
'876': id10877
'877': id10878
'878': id10879
'879': id10880
'880': id10881
'881': id10882
'882': id10883
'883': id10884
'884': id10885
'885': id10886
'886': id10887
'887': id10888
'888': id10889
'889': id10890
'890': id10891
'891': id10892
'892': id10893
'893': id10894
'894': id10895
'895': id10896
'896': id10897
'897': id10898
'898': id10899
'899': id10900
'900': id10901
'901': id10902
'902': id10903
'903': id10904
'904': id10905
'905': id10906
'906': id10907
'907': id10908
'908': id10909
'909': id10910
'910': id10911
'911': id10912
'912': id10913
'913': id10914
'914': id10915
'915': id10916
'916': id10917
'917': id10918
'918': id10919
'919': id10920
'920': id10921
'921': id10922
'922': id10923
'923': id10924
'924': id10925
'925': id10926
'926': id10927
'927': id10928
'928': id10929
'929': id10930
'930': id10931
'931': id10932
'932': id10933
'933': id10934
'934': id10935
'935': id10936
'936': id10937
'937': id10938
'938': id10939
'939': id10940
'940': id10941
'941': id10942
'942': id10943
'943': id10944
'944': id10945
'945': id10946
'946': id10947
'947': id10948
'948': id10949
'949': id10950
'950': id10951
'951': id10952
'952': id10953
'953': id10954
'954': id10955
'955': id10956
'956': id10957
'957': id10958
'958': id10959
'959': id10960
'960': id10961
'961': id10962
'962': id10963
'963': id10964
'964': id10965
'965': id10966
'966': id10967
'967': id10968
'968': id10969
'969': id10970
'970': id10971
'971': id10972
'972': id10973
'973': id10974
'974': id10975
'975': id10976
'976': id10977
'977': id10978
'978': id10979
'979': id10980
'980': id10981
'981': id10982
'982': id10983
'983': id10984
'984': id10985
'985': id10986
'986': id10987
'987': id10988
'988': id10989
'989': id10990
'990': id10991
'991': id10992
'992': id10993
'993': id10994
'994': id10995
'995': id10996
'996': id10997
'997': id10998
'998': id10999
'999': id11000
'1000': id11001
'1001': id11002
'1002': id11003
'1003': id11004
'1004': id11005
'1005': id11006
'1006': id11007
'1007': id11008
'1008': id11009
'1009': id11010
'1010': id11011
'1011': id11012
'1012': id11013
'1013': id11014
'1014': id11015
'1015': id11016
'1016': id11017
'1017': id11018
'1018': id11019
'1019': id11020
'1020': id11021
'1021': id11022
'1022': id11023
'1023': id11024
'1024': id11025
'1025': id11026
'1026': id11027
'1027': id11028
'1028': id11029
'1029': id11030
'1030': id11031
'1031': id11032
'1032': id11033
'1033': id11034
'1034': id11035
'1035': id11036
'1036': id11037
'1037': id11038
'1038': id11039
'1039': id11040
'1040': id11041
'1041': id11042
'1042': id11043
'1043': id11044
'1044': id11045
'1045': id11046
'1046': id11047
'1047': id11048
'1048': id11049
'1049': id11050
'1050': id11051
'1051': id11052
'1052': id11053
'1053': id11054
'1054': id11055
'1055': id11056
'1056': id11057
'1057': id11058
'1058': id11059
'1059': id11060
'1060': id11061
'1061': id11062
'1062': id11063
'1063': id11064
'1064': id11065
'1065': id11066
'1066': id11067
'1067': id11068
'1068': id11069
'1069': id11070
'1070': id11071
'1071': id11072
'1072': id11073
'1073': id11074
'1074': id11075
'1075': id11076
'1076': id11077
'1077': id11078
'1078': id11079
'1079': id11080
'1080': id11081
'1081': id11082
'1082': id11083
'1083': id11084
'1084': id11085
'1085': id11086
'1086': id11087
'1087': id11088
'1088': id11089
'1089': id11090
'1090': id11091
'1091': id11092
'1092': id11093
'1093': id11094
'1094': id11095
'1095': id11096
'1096': id11097
'1097': id11098
'1098': id11099
'1099': id11100
'1100': id11101
'1101': id11102
'1102': id11103
'1103': id11104
'1104': id11105
'1105': id11106
'1106': id11107
'1107': id11108
'1108': id11109
'1109': id11110
'1110': id11111
'1111': id11112
'1112': id11113
'1113': id11114
'1114': id11115
'1115': id11116
'1116': id11117
'1117': id11118
'1118': id11119
'1119': id11120
'1120': id11121
'1121': id11122
'1122': id11123
'1123': id11124
'1124': id11125
'1125': id11126
'1126': id11127
'1127': id11128
'1128': id11129
'1129': id11130
'1130': id11131
'1131': id11132
'1132': id11133
'1133': id11134
'1134': id11135
'1135': id11136
'1136': id11137
'1137': id11138
'1138': id11139
'1139': id11140
'1140': id11141
'1141': id11142
'1142': id11143
'1143': id11144
'1144': id11145
'1145': id11146
'1146': id11147
'1147': id11148
'1148': id11149
'1149': id11150
'1150': id11151
'1151': id11152
'1152': id11153
'1153': id11154
'1154': id11155
'1155': id11156
'1156': id11157
'1157': id11158
'1158': id11159
'1159': id11160
'1160': id11161
'1161': id11162
'1162': id11163
'1163': id11164
'1164': id11165
'1165': id11166
'1166': id11167
'1167': id11168
'1168': id11169
'1169': id11170
'1170': id11171
'1171': id11172
'1172': id11173
'1173': id11174
'1174': id11175
'1175': id11176
'1176': id11177
'1177': id11178
'1178': id11179
'1179': id11180
'1180': id11181
'1181': id11182
'1182': id11183
'1183': id11184
'1184': id11185
'1185': id11186
'1186': id11187
'1187': id11188
'1188': id11189
'1189': id11190
'1190': id11191
'1191': id11192
'1192': id11193
'1193': id11194
'1194': id11195
'1195': id11196
'1196': id11197
'1197': id11198
'1198': id11199
'1199': id11200
'1200': id11201
'1201': id11202
'1202': id11203
'1203': id11204
'1204': id11205
'1205': id11206
'1206': id11207
'1207': id11208
'1208': id11209
'1209': id11210
'1210': id11211
'1211': id11212
'1212': id11213
'1213': id11214
'1214': id11215
'1215': id11216
'1216': id11217
'1217': id11218
'1218': id11219
'1219': id11220
'1220': id11221
'1221': id11222
'1222': id11223
'1223': id11224
'1224': id11225
'1225': id11226
'1226': id11227
'1227': id11228
'1228': id11229
'1229': id11230
'1230': id11231
'1231': id11232
'1232': id11233
'1233': id11234
'1234': id11235
'1235': id11236
'1236': id11237
'1237': id11238
'1238': id11239
'1239': id11240
'1240': id11241
'1241': id11242
'1242': id11243
'1243': id11244
'1244': id11245
'1245': id11246
'1246': id11247
'1247': id11248
'1248': id11249
'1249': id11250
'1250': id11251
splits:
- name: train
num_bytes: 12729268
num_examples: 138361
- name: validation
num_bytes: 635172
num_examples: 6904
- name: test
num_bytes: 759096
num_examples: 8251
download_size: 0
dataset_size: 14123536
---
# Dataset Card for SUPERB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://superbbenchmark.org](http://superbbenchmark.org)
- **Repository:** [https://github.com/s3prl/s3prl](https://github.com/s3prl/s3prl)
- **Paper:** [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [Lewis Tunstall](mailto:lewis@huggingface.co) and [Albert Villanova](mailto:albert@huggingface.co)
### Dataset Summary
SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data.
### Supported Tasks and Leaderboards
The SUPERB leaderboard can be found here https://superbbenchmark.org/leaderboard and consists of the following tasks:
#### pr
Phoneme Recognition (PR) transcribes an utterance into the smallest content units. This task includes alignment modeling to avoid potentially inaccurate forced alignment. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are adopted in SUPERB for training/validation/testing. Phoneme transcriptions are obtained from the LibriSpeech official g2p-model-5 and the conversion script in Kaldi librispeech s5 recipe. The evaluation metric is phone error rate (PER).
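PER is computed the same way as word error rate, just over phoneme sequences instead of words. A minimal plain-Python sketch (illustrative only, not the official SUPERB scoring code):

```python
def phone_error_rate(reference, hypothesis):
    """Levenshtein edit distance between two phoneme sequences,
    normalized by the reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] = edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m

# One substituted phone in a four-phone reference -> PER = 0.25
print(phone_error_rate(["HH", "AH", "L", "OW"], ["HH", "AH", "L", "UW"]))
```

The same function applied to word sequences gives the WER used for the `asr` task.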
#### asr
Automatic Speech Recognition (ASR) transcribes utterances into words. While PR analyzes the improvement in modeling phonetics, ASR reflects the significance of the improvement in a real-world scenario. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are used for training/validation/testing. The evaluation metric is word error rate (WER).
#### ks
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for fast response time. Thus, accuracy, model size, and inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task. The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include false positives. The evaluation metric is accuracy (ACC).
##### Example of usage:
Use these auxiliary functions to:
- load the audio file into an audio data array
- sample from long `_silence_` audio clips
For other examples of handling long `_silence_` clips see the [S3PRL](https://github.com/s3prl/s3prl/blob/099ce807a6ffa6bf2482ceecfcaf83dea23da355/s3prl/downstream/speech_commands/dataset.py#L80)
or [TFDS](https://github.com/tensorflow/datasets/blob/6b8cfdb7c3c0a04e731caaa8660ce948d0a67b1e/tensorflow_datasets/audio/speech_commands.py#L143) implementations.
```python
def map_to_array(example):
import soundfile as sf
speech_array, sample_rate = sf.read(example["file"])
example["speech"] = speech_array
example["sample_rate"] = sample_rate
return example
def sample_noise(example):
# Use this function to extract random 1 sec slices of each _silence_ utterance,
# e.g. inside `torch.utils.data.Dataset.__getitem__()`
from random import randint
if example["label"] == "_silence_":
random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
return example
```
#### qbe
Query by Example Spoken Term Detection (QbE) detects a spoken term (query) in an audio database (documents) by classifying each query-document pair as a match or not. The English subset of the [QUESST 2014 challenge](https://github.com/s3prl/s3prl/tree/master/downstream#qbe-query-by-example-spoken-term-detection) is adopted since we focus on investigating English as the first step. The evaluation metric is maximum term weighted value (MTWV), which balances misses and false alarms.
#### ic
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands dataset](https://github.com/s3prl/s3prl/tree/master/downstream#ic-intent-classification---fluent-speech-commands), where each utterance is tagged with three intent labels: action, object, and location. The evaluation metric is accuracy (ACC).
#### sf
Slot Filling (SF) predicts a sequence of semantic slot-types from an utterance, e.g. the slot-type FromLocation for the spoken word Taipei, which is its slot-value. Both slot-types and slot-values are essential for an SLU system to function. The evaluation metrics thus include slot-type F1 score and slot-value CER. [Audio SNIPS](https://github.com/s3prl/s3prl/tree/master/downstream#sf-end-to-end-slot-filling) is adopted, which synthesizes multi-speaker utterances for SNIPS. Following the standard split in SNIPS, US-accent speakers are further selected for training, and others are for validation/testing.
#### si
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing. The widely used [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) is adopted, and the evaluation metric is accuracy (ACC).
#### asv
Automatic Speaker Verification (ASV) verifies whether the speakers of a pair of utterances match as a binary classification, and speakers in the testing set may not appear in the training set. Thus, ASV is more challenging than SID. VoxCeleb1 is used without VoxCeleb2 training data and noise augmentation. The evaluation metric is equal error rate (EER).
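EER is the operating point where the false-acceptance rate equals the false-rejection rate. A rough plain-Python sketch of how it can be estimated from trial scores (illustrative only, not the official SUPERB scoring script):

```python
def equal_error_rate(scores, labels):
    """Estimate EER by sweeping a decision threshold over the scores.

    scores: similarity scores for trial pairs (higher = more likely same speaker)
    labels: 1 for target (same-speaker) trials, 0 for non-target trials
    """
    targets = [s for s, l in zip(scores, labels) if l == 1]
    nontargets = [s for s, l in zip(scores, labels) if l == 0]
    best = None
    for t in sorted(scores):
        far = sum(s >= t for s in nontargets) / len(nontargets)  # false acceptances
        frr = sum(s < t for s in targets) / len(targets)         # false rejections
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Perfectly separated trials -> EER = 0.0
print(equal_error_rate([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```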
#### sd
Speaker Diarization (SD) predicts *who is speaking when* for each timestamp, and multiple speakers can speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be able to represent mixtures of signals. [LibriMix](https://github.com/s3prl/s3prl/tree/master/downstream#sd-speaker-diarization) is adopted where LibriSpeech train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing. We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using alignments from the Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER).
##### Example of usage
Use these auxiliary functions to:
- load the audio file into an audio data array
- generate the label array
```python
def load_audio_file(example, frame_shift=160):
import soundfile as sf
example["array"], example["sample_rate"] = sf.read(
example["file"], start=example["start"] * frame_shift, stop=example["end"] * frame_shift
)
return example
def generate_label(example, frame_shift=160, num_speakers=2, rate=16000):
import numpy as np
start = example["start"]
end = example["end"]
frame_num = end - start
speakers = sorted({speaker["speaker_id"] for speaker in example["speakers"]})
label = np.zeros((frame_num, num_speakers), dtype=np.int32)
for speaker in example["speakers"]:
speaker_index = speakers.index(speaker["speaker_id"])
start_frame = np.rint(speaker["start"] * rate / frame_shift).astype(int)
end_frame = np.rint(speaker["end"] * rate / frame_shift).astype(int)
rel_start = rel_end = None
if start <= start_frame < end:
rel_start = start_frame - start
if start < end_frame <= end:
rel_end = end_frame - start
if rel_start is not None or rel_end is not None:
label[rel_start:rel_end, speaker_index] = 1
example["label"] = label
return example
```
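Given frame-level label matrices of the kind produced by `generate_label` above, a simplified frame-level diarization error can be computed by comparing reference and hypothesis activities, taking the best of the two possible speaker permutations. This is only a rough illustration; the benchmark's DER is computed with md-eval-style scoring:

```python
def frame_diarization_error(reference, hypothesis):
    """Fraction of (frame, speaker) slots labeled incorrectly, minimized over
    the two speaker permutations (two-speaker case only).

    reference, hypothesis: lists of [spk0_active, spk1_active] per frame (0/1).
    """
    def errors(perm):
        return sum(r[s] != h[perm[s]]
                   for r, h in zip(reference, hypothesis) for s in (0, 1))

    n_slots = 2 * len(reference)
    return min(errors((0, 1)), errors((1, 0))) / n_slots

ref = [[1, 0], [1, 1], [0, 1]]
print(frame_diarization_error(ref, ref))                      # -> 0.0
print(frame_diarization_error(ref, [[0, 1], [1, 1], [1, 0]])) # swapped speakers -> 0.0
```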
#### er
Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset [IEMOCAP](https://github.com/s3prl/s3prl/tree/master/downstream#er-emotion-recognition) is adopted, and we follow the conventional evaluation protocol: we drop the unbalanced emotion classes to leave the final four classes with a similar amount of data points and cross-validate on the five folds of the standard splits. The evaluation metric is accuracy (ACC).
### Languages
The language data in SUPERB is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
An example from each split looks like:
```python
{'chapter_id': 1240,
'file': 'path/to/file.flac',
'audio': {'path': 'path/to/file.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '103-1240-0000',
'speaker_id': 103,
'text': 'CHAPTER ONE MISSUS RACHEL LYNDE IS SURPRISED MISSUS RACHEL LYNDE '
'LIVED JUST WHERE THE AVONLEA MAIN ROAD DIPPED DOWN INTO A LITTLE '
'HOLLOW FRINGED WITH ALDERS AND LADIES EARDROPS AND TRAVERSED BY A '
'BROOK'}
```
#### ks
An example from each split looks like:
```python
{
'file': '/path/yes/af7a8296_nohash_1.wav',
'audio': {'path': '/path/yes/af7a8296_nohash_1.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'label': 0 # 'yes'
}
```
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
```python
{
'file': "/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav",
'audio': {'path': '/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'speaker_id': '2BqVo8kVB2Skwgyb',
'text': 'Turn the bedroom lights off',
'action': 3, # 'deactivate'
'object': 7, # 'lights'
'location': 0 # 'bedroom'
}
```
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
```python
{
'file': '/path/wav/id10003/na8-QEFmj44/00003.wav',
'audio': {'path': '/path/wav/id10003/na8-QEFmj44/00003.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'label': 2 # 'id10003'
}
```
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
An example from each split looks like:
```python
{
'record_id': '1578-6379-0038_6415-111615-0009',
'file': 'path/to/file.wav',
'audio': {'path': 'path/to/file.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'start': 0,
'end': 1590,
'speakers': [
{'speaker_id': '1578', 'start': 28, 'end': 657},
{'speaker_id': '6415', 'start': 28, 'end': 1576}
]
}
```
#### er
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
#### Note about the `audio` fields
When accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
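The toy sketch below (no real audio involved, names are made up for illustration) mimics this lazy-decoding behaviour to show why indexing the row first decodes one file, while fetching the whole column first decodes every file:

```python
class LazyAudioColumn:
    """Stand-in for an audio column that decodes files on access."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_calls = 0

    def decode(self, path):
        self.decode_calls += 1
        return f"decoded:{path}"

    def row(self, i):            # like dataset[i]["audio"]: decodes one file
        return self.decode(self.paths[i])

    def full_column(self):       # like dataset["audio"]: decodes all files
        return [self.decode(p) for p in self.paths]

col = LazyAudioColumn([f"clip_{i}.wav" for i in range(1000)])
col.row(0)
print(col.decode_calls)    # -> 1
col.full_column()[0]       # decodes all 1000 files just to read one
print(col.decode_calls)    # -> 1001
```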
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text` (`string`): The transcription of the audio file.
- `speaker_id` (`integer`): A unique ID of the speaker. The same speaker id can be found for multiple data samples.
- `chapter_id` (`integer`): ID of the audiobook chapter which includes the transcription.
- `id` (`string`): A unique ID of the data sample.
#### ks
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label of the spoken command. Possible values:
- `0: "yes", 1: "no", 2: "up", 3: "down", 4: "left", 5: "right", 6: "on", 7: "off", 8: "stop", 9: "go", 10: "_silence_", 11: "_unknown_"`
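With the `datasets` library the same mapping is exposed through the feature object, e.g. `dataset.features["label"].int2str(0)`. A standalone sketch of the mapping using the class list above:

```python
# Class names in ClassLabel order for the ks config
KS_LABELS = ["yes", "no", "up", "down", "left", "right",
             "on", "off", "stop", "go", "_silence_", "_unknown_"]

def int2str(label_id):
    """Integer label -> keyword string."""
    return KS_LABELS[label_id]

def str2int(name):
    """Keyword string -> integer label."""
    return KS_LABELS.index(name)

print(int2str(0))            # -> yes
print(str2int("_silence_"))  # -> 10
```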
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `speaker_id` (`string`): ID of the speaker.
- `text` (`string`): Transcription of the spoken command.
- `action` (`ClassLabel`): Label of the command's action. Possible values:
- `0: "activate", 1: "bring", 2: "change language", 3: "deactivate", 4: "decrease", 5: "increase"`
- `object` (`ClassLabel`): Label of the command's object. Possible values:
- `0: "Chinese", 1: "English", 2: "German", 3: "Korean", 4: "heat", 5: "juice", 6: "lamp", 7: "lights", 8: "music", 9: "newspaper", 10: "none", 11: "shoes", 12: "socks", 13: "volume"`
- `location` (`ClassLabel`): Label of the command's location. Possible values:
- `0: "bedroom", 1: "kitchen", 2: "none", 3: "washroom"`
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label (ID) of the speaker. Possible values:
- `0: "id10001", 1: "id10002", 2: "id10003", ..., 1250: "id11251"`
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
The data fields in all splits are:
- `record_id` (`string`): ID of the record.
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `start` (`integer`): Start frame of the audio.
- `end` (`integer`): End frame of the audio.
- `speakers` (`list` of `dict`): List of speakers in the audio. Each item contains the fields:
- `speaker_id` (`string`): ID of the speaker.
- `start` (`integer`): Frame when the speaker starts speaking.
- `end` (`integer`): Frame when the speaker stops speaking.
#### er
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label of the speech emotion. Possible values:
- `0: "neu", 1: "hap", 2: "ang", 3: "sad"`
### Data Splits
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
| | train | validation | test |
|-----|------:|-----------:|-----:|
| asr | 28539 | 2703 | 2620 |
#### ks
| | train | validation | test |
|----|------:|-----------:|-----:|
| ks | 51094 | 6798 | 3081 |
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
| | train | validation | test |
|----|------:|-----------:|-----:|
| ic | 23132 | 3118 | 3793 |
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
| | train | validation | test |
|----|-------:|-----------:|-----:|
| si | 138361 | 6904 | 8251 |
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
The data is split into "train", "dev" and "test" sets, each containing the following number of examples:
| | train | dev | test |
|----|------:|-----:|-----:|
| sd | 13901 | 3014 | 3002 |
#### er
The data is split into 5 sets intended for 5-fold cross-validation:
| | session1 | session2 | session3 | session4 | session5 |
|----|---------:|---------:|---------:|---------:|---------:|
| er | 1085 | 1023 | 1151 | 1031 | 1241 |
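The conventional protocol trains on four sessions and evaluates on the held-out fifth, averaging accuracy over the five folds. A hedged outline with a placeholder evaluation function (a real `evaluate_fold` would train and score a model):

```python
SESSIONS = ["session1", "session2", "session3", "session4", "session5"]

def cross_validate(evaluate_fold):
    """Run leave-one-session-out cross-validation.

    evaluate_fold(train_sessions, test_session) -> accuracy for that fold.
    Returns the mean accuracy over the five folds.
    """
    accuracies = []
    for held_out in SESSIONS:
        train = [s for s in SESSIONS if s != held_out]
        accuracies.append(evaluate_fold(train, held_out))
    return sum(accuracies) / len(accuracies)

# Dummy evaluator just to show the call shape
print(cross_validate(lambda train, test: 1.0))  # -> 1.0
```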
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### pr and asr
The license for LibriSpeech is the Creative Commons Attribution 4.0 International license ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)).
#### ks
The license for Speech Commands is [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode)
#### qbe
The license for QUESST 2014 is not known.
#### ic
The license for Fluent Speech Commands dataset is the [Fluent Speech Commands Public License](https://fluent.ai/wp-content/uploads/2021/04/Fluent_Speech_Commands_Public_License.pdf)
#### sf
The license for Audio SNIPS dataset is not known.
#### si and asv
The license for VoxCeleb1 dataset is the Creative Commons Attribution 4.0 International license ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)).
#### sd
LibriMix is based on the LibriSpeech (see above) and Wham! noises datasets. The Wham! noises dataset is distributed under the Attribution-NonCommercial 4.0 International ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)) license.
#### er
IEMOCAP is distributed under [its own license](https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf).
### Citation Information
```
@article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Note that each SUPERB dataset has its own citation. Please see the source to see the correct citation for each contained dataset.
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) and [@anton-l](https://github.com/anton-l) for adding this dataset. |
SetFit/subj | 2022-01-15T21:34:11.000Z | [
"region:us"
] | SetFit | null | null | null | 4 | 2,775 | # Subjective vs Objective
This is the SUBJ dataset as used in [SentEval](https://github.com/facebookresearch/SentEval). It contains sentences annotated with whether they describe something subjective about a movie or something objective. |
conllpp | 2023-04-05T10:02:29.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:en",
"license:unknown",
"region:us"
] | null | CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set
have been manually corrected. The training set and development set are included for completeness.
For more details see https://www.aclweb.org/anthology/D19-1519/ and https://github.com/ZihanWangKi/CrossWeigh | @inproceedings{wang2019crossweigh,
title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations},
author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={5157--5166},
year={2019}
} | null | 5 | 2,738 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|conll2003
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: conll
pretty_name: CoNLL++
train-eval-index:
- config: conllpp
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
0: '"'
1: ''''''
2: '#'
3: $
4: (
5: )
6: ','
7: .
8: ':'
9: '``'
10: CC
11: CD
12: DT
13: EX
14: FW
15: IN
16: JJ
17: JJR
18: JJS
19: LS
20: MD
21: NN
22: NNP
23: NNPS
24: NNS
25: NN|SYM
26: PDT
27: POS
28: PRP
29: PRP$
30: RB
31: RBR
32: RBS
33: RP
34: SYM
35: TO
36: UH
37: VB
38: VBD
39: VBG
40: VBN
41: VBP
42: VBZ
43: WDT
44: WP
45: WP$
46: WRB
- name: chunk_tags
sequence:
class_label:
names:
0: O
1: B-ADJP
2: I-ADJP
3: B-ADVP
4: I-ADVP
5: B-CONJP
6: I-CONJP
7: B-INTJ
8: I-INTJ
9: B-LST
10: I-LST
11: B-NP
12: I-NP
13: B-PP
14: I-PP
15: B-PRT
16: I-PRT
17: B-SBAR
18: I-SBAR
19: B-UCP
20: I-UCP
21: B-VP
22: I-VP
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
7: B-MISC
8: I-MISC
config_name: conllpp
splits:
- name: train
num_bytes: 6931393
num_examples: 14041
- name: validation
num_bytes: 1739247
num_examples: 3250
- name: test
num_bytes: 1582078
num_examples: 3453
download_size: 4859600
dataset_size: 10252718
---
# Dataset Card for "conllpp"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1519)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
CoNLLpp is a corrected version of the CoNLL2003 NER dataset, in which the labels of 5.38% of the sentences in the test set
have been manually corrected. The training and development sets from CoNLL2003 are included for completeness. One
example of a correction in the test set is:
```
{
"tokens": ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."],
"original_ner_tags_in_conll2003": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"],
"corrected_ner_tags_in_conllpp": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"],
}
```
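The correction above can be inspected programmatically; a minimal sketch (plain Python over the example's tag lists, not an official API):
```python
# Compare the original CoNLL2003 tags with the corrected CoNLLpp tags
# from the example above and report which token positions changed.
tokens = ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."]
original = ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"]
corrected = ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"]

changed = [(i, tokens[i], original[i], corrected[i])
           for i in range(len(tokens)) if original[i] != corrected[i]]
print(changed)  # "CHINA" was relabeled from B-PER (person) to B-LOC (location)
```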
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
#### conllpp
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
"id": "0",
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```
### Data Fields
The data fields are the same among all splits.
#### conllpp
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
- `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4).
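Since the integer tags follow the `class_label` mappings in the metadata above, decoding them is a plain lookup; a small sketch (label list copied from the metadata rather than loaded from the hub):
```python
# Decode integer ner_tags into their string labels using the
# class_label names listed in the dataset metadata.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_tags(tag_ids, label_names):
    return [label_names[i] for i in tag_ids]

# First five tags of the train example shown above ("The European Commission said on ...").
print(decode_tags([0, 3, 4, 0, 0], NER_LABELS))  # ['O', 'B-ORG', 'I-ORG', 'O', 'O']
```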
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conllpp|14041| 3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wang2019crossweigh,
title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations},
author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={5157--5166},
year={2019}
}
```
### Contributions
Thanks to [@ZihanWangKi](https://github.com/ZihanWangKi) for adding this dataset. |
Skylion007/openwebtext | 2023-04-05T13:36:17.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | Skylion007 | An open-source replication of the WebText dataset from OpenAI. | @misc{Gokaslan2019OpenWeb,
title={OpenWebText Corpus},
author={Aaron Gokaslan*, Vanya Cohen*, Ellie Pavlick, Stefanie Tellex},
howpublished = {\\url{http://Skylion007.github.io/OpenWebTextCorpus}},
year={2019}
} | null | 202 | 2,720 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: OpenWebText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: openwebtext
dataset_info:
features:
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 39769491688
num_examples: 8013769
download_size: 12880189440
dataset_size: 39769491688
---
# Dataset Card for "openwebtext"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://skylion007.github.io/OpenWebTextCorpus/](https://skylion007.github.io/OpenWebTextCorpus/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 13.51 GB
- **Size of the generated dataset:** 41.70 GB
- **Total amount of disk used:** 55.21 GB
### Dataset Summary
An open-source replication of the WebText dataset from OpenAI that was used to train GPT-2.
This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 13.51 GB
- **Size of the generated dataset:** 41.70 GB
- **Total amount of disk used:** 55.21 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\"A magazine supplement with an image of Adolf Hitler and the title 'The Unreadable Book' is pictured in Berlin. No law bans “Mei..."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
### Data Splits
| name | train |
|------------|--------:|
| plain_text | 8013769 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
The authors started by extracting all Reddit post URLs from the Reddit submissions dataset. These links were deduplicated, filtered to exclude non-HTML content, and then shuffled randomly. The links were then distributed to several machines in parallel for download, and all web pages were extracted using the newspaper Python package. Using Facebook's fastText, non-English web pages were filtered out.
Subsequently, near-duplicate documents were identified using locality-sensitive hashing (LSH). Documents were hashed into sets of 5-grams, and documents whose similarity to another document exceeded 0.5 were removed. The remaining documents were tokenized, and documents with fewer than 128 tokens were removed. This left 38GB of text data (40GB using SI units) from 8,013,769 documents.
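The deduplication step can be sketched as follows; this is an illustrative reimplementation of the idea, not the authors' pipeline, and a real run would approximate the pairwise comparison with LSH rather than compute it exactly:
```python
# Near-duplicate detection via 5-gram shingling and Jaccard similarity.
def shingles(tokens, n=5):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a or b else 0.0

doc1 = "the quick brown fox jumps over the lazy dog".split()
doc2 = "the quick brown fox jumps over a lazy dog".split()
sim = jaccard(shingles(doc1), shingles(doc2))
print(f"similarity = {sim:.2f}")  # pairs above 0.5 would be treated as duplicates
```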
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
These data are released under this licensing scheme from the original authors ([source](https://skylion007.github.io/OpenWebTextCorpus/)):
```
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these parallel data under the [Creative Commons CC0 license (“no rights reserved”)](https://creativecommons.org/share-your-work/public-domain/cc0/)
```
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
Then contact us at the following email address: openwebtext at gmail.com and datasets at huggingface.co.
#### Take down policy
The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
### Citation Information
```
@misc{Gokaslan2019OpenWeb,
title={OpenWebText Corpus},
author={Aaron Gokaslan*, Vanya Cohen*, Ellie Pavlick, Stefanie Tellex},
howpublished = {\url{http://Skylion007.github.io/OpenWebTextCorpus}},
year={2019}
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
|
llm-book/livedoor-news-corpus | 2023-09-30T08:44:39.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:ja",
"news",
"region:us"
] | llm-book | null | null | null | 1 | 2,707 | ---
task_categories:
- summarization
language:
- ja
tags:
- news
pretty_name: livedoor-news-corpus
size_categories:
- 1K<n<10K
---
# Dataset Card for llm-book/livedoor-news-corpus
This dataset, used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models), is built from the "livedoor News Corpus" provided by RONDHUIT Co., Ltd.
It uses the same data as the [original site](https://www.rondhuit.com/download.html).
The corpus was created by collecting news articles from "livedoor News", operated by NHN Japan Corporation, that are covered by the Creative Commons license shown below, and removing HTML tags as far as possible.
### Licence
Attribution-NoDerivs 2.1 Japan (CC BY-ND 2.1 JP) License |
winograd_wsc | 2023-01-25T15:02:35.000Z | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-coreference-resolution",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is
resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its
resolution. The schema takes its name from a well-known example by Terry Winograd:
> The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
If the word is ``feared'', then ``they'' presumably refers to the city council; if it is ``advocated'' then ``they''
presumably refers to the demonstrators. | @inproceedings{levesque2012winograd,
title={The winograd schema challenge},
author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
year={2012},
organization={Citeseer}
} | null | 5 | 2,696 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-coreference-resolution
paperswithcode_id: wsc
pretty_name: Winograd Schema Challenge
dataset_info:
- config_name: wsc285
features:
- name: text
dtype: string
- name: pronoun
dtype: string
- name: pronoun_loc
dtype: int32
- name: quote
dtype: string
- name: quote_loc
dtype: int32
- name: options
sequence: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: source
dtype: string
splits:
- name: test
num_bytes: 52281
num_examples: 285
download_size: 113235
dataset_size: 52281
- config_name: wsc273
features:
- name: text
dtype: string
- name: pronoun
dtype: string
- name: pronoun_loc
dtype: int32
- name: quote
dtype: string
- name: quote_loc
dtype: int32
- name: options
sequence: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: source
dtype: string
splits:
- name: test
num_bytes: 49674
num_examples: 273
download_size: 113235
dataset_size: 49674
---
# Dataset Card for The Winograd Schema Challenge
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html
- **Repository:**
- **Paper:** https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.9814&rep=rep1&type=pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is
resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its
resolution. The schema takes its name from a well-known example by Terry Winograd:
> The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
If the word is ``feared'', then ``they'' presumably refers to the city council; if it is ``advocated'' then ``they''
presumably refers to the demonstrators.
### Supported Tasks and Leaderboards
From the official webpage:
> A contest, entitled the Winograd Schema Challenge was run once, in 2016. At that time, there was a cash prize
offered for achieving human-level performance in the contest. Since then, the sponsor has withdrawn; therefore NO
CASH PRIZES CAN BE OFFERED OR WILL BE AWARDED FOR ANY KIND OF PERFORMANCE OR ACHIEVEMENT ON THIS CHALLENGE.
### Languages
The dataset is in English.
[Translation of 12 WSs into Chinese](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WSChinese.html), translated by Wei Xu.
Translations into Japanese, by Soichiro Tanaka, Rafal Rzepka, and Shiho Katajima\
**Translation changing English names to Japanese** [PDF](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/collection_ja.pdf) [HTML](http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_ja.html)\
**Translation preserving English names** [PDF](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/collection_katakana.pdf) [HTML](http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_katakana.html)
[Translation into French](http://www.llf.cnrs.fr/winograd-fr), by Pascal Amsili and Olga Seminck.
[Winograd Schemas in Portuguese](https://sol.sbc.org.br/index.php/eniac/article/view/9334) by Gabriela Melo, Vinicius Imaizumi, and Fábio Cozman.
[Mandarinograd: A Chinese Collection of Winograd Schemas](https://www.aclweb.org/anthology/2020.lrec-1.3) by Timothée Bernard and Ting Han, LREC 2020.
## Dataset Structure
### Data Instances
Each instance contains a text passage with a designated pronoun and two possible answers indicating which entity in
the passage the pronoun represents. An example instance looks like the following:
```python
{
'label': 0,
'options': ['The city councilmen', 'The demonstrators'],
'pronoun': 'they',
'pronoun_loc': 63,
'quote': 'they feared violence',
'quote_loc': 63,
'source': '(Winograd 1972)',
'text': 'The city councilmen refused the demonstrators a permit because they feared violence.'
}
```
### Data Fields
- `text` (str): The text sequence
- `options` (list[str]): The two entity options that the pronoun may be referring to
- `label` (int): The index of the correct option in the `options` field
- `pronoun` (str): The pronoun in the sequence to be resolved
- `pronoun_loc` (int): The starting position of the pronoun in the sequence
- `quote` (str): The substring containing the key action or context surrounding the pronoun
- `quote_loc` (int): The starting position of the quote in the sequence
- `source` (str): A description of the source who contributed the example
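A prediction for an instance in this format can be scored by comparing the index of the chosen option with `label`; a minimal sketch (the helper name is illustrative, not part of any official evaluation script):
```python
# Score a predicted antecedent against a Winograd schema instance.
instance = {
    "text": "The city councilmen refused the demonstrators a permit because they feared violence.",
    "options": ["The city councilmen", "The demonstrators"],
    "label": 0,
}

def is_correct(instance, predicted_option):
    # The prediction is right when its index in `options` equals `label`.
    return instance["options"].index(predicted_option) == instance["label"]

print(is_correct(instance, "The city councilmen"))  # True
print(is_correct(instance, "The demonstrators"))    # False
```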
### Data Splits
Only a test split is included.
## Dataset Creation
### Curation Rationale
The Winograd Schema Challenge was proposed as an automated evaluation of an AI system's commonsense linguistic
understanding. From the webpage:
> The strengths of the challenge are that it is clear-cut, in that the answer to each schema is a binary choice;
vivid, in that it is obvious to non-experts that a program that fails to get the right answers clearly has serious
gaps in its understanding; and difficult, in that it is far beyond the current state of the art.
### Source Data
#### Initial Data Collection and Normalization
This data was manually written by experts such that the schemas are:
- easily disambiguated by the human reader (ideally, so easily that the reader does not even notice that there is an ambiguity);
- not solvable by simple techniques such as selectional restrictions;
- Google-proof; that is, there is no obvious statistical test over text corpora that will reliably disambiguate these correctly.
#### Who are the source language producers?
This dataset has grown over time, and so was produced by a variety of linguistic and AI researchers. See the `source`
field for the source of each instance.
### Annotations
#### Annotation process
Annotations are produced by the experts who construct the examples.
#### Who are the annotators?
See above.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset has grown over time, and so was produced by a variety of linguistic and AI researchers. See the `source`
field for the source of each instance.
### Licensing Information
This work is licensed under a [Creative Commons Attribution 4.0 International
License](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
The Winograd Schema Challenge including many of the examples here was proposed by
[Levesque et al 2012](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.9814&rep=rep1&type=pdf):
```
@inproceedings{levesque2012winograd,
title={The winograd schema challenge},
author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
year={2012},
organization={Citeseer}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
BeIR/hotpotqa | 2022-10-23T06:02:40.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 2 | 2,684 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
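The three file formats above can be read with the standard library alone; a hedged sketch using toy in-memory data rather than real BEIR files:
```python
import csv
import io
import json

# A one-line corpus in the jsonl format described above.
corpus_jsonl = '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born ..."}'
corpus = {d["_id"]: d for d in (json.loads(line) for line in corpus_jsonl.splitlines())}

# A qrels file: tab-separated, with the first row kept as a header.
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n"
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(corpus["doc1"]["title"], qrels["q1"])
```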
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
anton-l/superb_demo | 2022-04-14T13:54:54.000Z | [
"region:us"
] | anton-l | Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.
Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .flac format and is not converted to a float32 array. To
convert the audio file to a float32 array, please make use of the `.map()`
function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 1 | 2,675 | # Disclaimer
This is a tiny subset of the SUPERB dataset, which is intended only for demo purposes!
See the full dataset here: https://huggingface.co/datasets/superb
|
timit_asr | 2022-10-28T16:41:41.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | null | The TIMIT corpus of reading speech has been developed to provide speech data for acoustic-phonetic research studies
and for the evaluation of automatic speech recognition systems.
TIMIT contains high quality recordings of 630 individuals/speakers with 8 different American English dialects,
with each individual reading upto 10 phonetically rich sentences.
More info on TIMIT dataset can be understood from the "README" which can be found here:
https://catalog.ldc.upenn.edu/docs/LDC93S1/readme.txt | @inproceedings{
title={TIMIT Acoustic-Phonetic Continuous Speech Corpus},
author={Garofolo, John S., et al},
ldc_catalog_no={LDC93S1},
DOI={https://doi.org/10.35111/17gk-bn40},
journal={Linguistic Data Consortium, Philadelphia},
year={1983}
} | null | 15 | 2,664 | ---
pretty_name: TIMIT
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
license_details: "LDC-User-Agreement-for-Non-Members"
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: timit
train-eval-index:
- config: clean
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
eval_split: test
col_mapping:
file: path
text: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
---
# Dataset Card for timit_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TIMIT Acoustic-Phonetic Continuous Speech Corpus](https://catalog.ldc.upenn.edu/LDC93S1)
- **Repository:** [Needs More Information]
- **Paper:** [TIMIT: Dataset designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems.](https://catalog.ldc.upenn.edu/LDC93S1)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-timit)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The TIMIT corpus of read speech is designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems. TIMIT contains broadband recordings of 630 speakers of eight major dialects of American English, each reading ten phonetically rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic and word transcriptions as well as a 16-bit, 16kHz speech waveform file for each utterance. Corpus design was a joint effort among the Massachusetts Institute of Technology (MIT), SRI International (SRI) and Texas Instruments, Inc. (TI). The speech was recorded at TI, transcribed at MIT and verified and prepared for CD-ROM production by the National Institute of Standards and Technology (NIST).
The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1:
```
To use TIMIT you have to download it manually.
Please create an account and download the dataset from https://catalog.ldc.upenn.edu/LDC93S1
Then extract all files in one folder and load the dataset with:
`datasets.load_dataset('timit_asr', data_dir='path/to/folder/folder_name')`
```
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-timit and ranks models based on their WER.
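As noted above, WER is the standard metric for this task. It is the word-level edit distance between a reference transcript and a hypothesis, normalized by the number of reference words. A minimal sketch of the computation (illustrative only, not the leaderboard's official scorer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words.

    Assumes a non-empty reference. Tokenization is a plain whitespace split.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming Levenshtein distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,          # deletion
                      d[j - 1] + 1,      # insertion
                      prev + (r != h))   # substitution (or match)
            prev, d[j] = d[j], cur
    return d[-1] / len(ref)

# One substitution out of 8 reference words:
print(wer("would such an act of refusal be useful",
          "would such an act of refusal be usefull"))  # → 0.125
```

In practice one would use an established scoring tool, but the definition above is what such tools compute.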
### Languages
The audio is in English.
The TIMIT corpus transcriptions have been hand verified. Test and training subsets, balanced for phonetic and dialectal coverage, are specified. Tabular computer-searchable information is included as well as written documentation.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{
'file': '/data/TRAIN/DR4/MMDM0/SI681.WAV',
'audio': {'path': '/data/TRAIN/DR4/MMDM0/SI681.WAV',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'text': 'Would such an act of refusal be useful?',
'phonetic_detail': [{'start': '0', 'stop': '1960', 'utterance': 'h#'},
{'start': '1960', 'stop': '2466', 'utterance': 'w'},
{'start': '2466', 'stop': '3480', 'utterance': 'ix'},
{'start': '3480', 'stop': '4000', 'utterance': 'dcl'},
{'start': '4000', 'stop': '5960', 'utterance': 's'},
{'start': '5960', 'stop': '7480', 'utterance': 'ah'},
{'start': '7480', 'stop': '7880', 'utterance': 'tcl'},
{'start': '7880', 'stop': '9400', 'utterance': 'ch'},
{'start': '9400', 'stop': '9960', 'utterance': 'ix'},
{'start': '9960', 'stop': '10680', 'utterance': 'n'},
{'start': '10680', 'stop': '13480', 'utterance': 'ae'},
{'start': '13480', 'stop': '15680', 'utterance': 'kcl'},
{'start': '15680', 'stop': '15880', 'utterance': 't'},
{'start': '15880', 'stop': '16920', 'utterance': 'ix'},
{'start': '16920', 'stop': '18297', 'utterance': 'v'},
{'start': '18297', 'stop': '18882', 'utterance': 'r'},
{'start': '18882', 'stop': '19480', 'utterance': 'ix'},
{'start': '19480', 'stop': '21723', 'utterance': 'f'},
{'start': '21723', 'stop': '22516', 'utterance': 'y'},
{'start': '22516', 'stop': '24040', 'utterance': 'ux'},
{'start': '24040', 'stop': '25190', 'utterance': 'zh'},
{'start': '25190', 'stop': '27080', 'utterance': 'el'},
{'start': '27080', 'stop': '28160', 'utterance': 'bcl'},
{'start': '28160', 'stop': '28560', 'utterance': 'b'},
{'start': '28560', 'stop': '30120', 'utterance': 'iy'},
{'start': '30120', 'stop': '31832', 'utterance': 'y'},
{'start': '31832', 'stop': '33240', 'utterance': 'ux'},
{'start': '33240', 'stop': '34640', 'utterance': 's'},
{'start': '34640', 'stop': '35968', 'utterance': 'f'},
{'start': '35968', 'stop': '37720', 'utterance': 'el'},
{'start': '37720', 'stop': '39920', 'utterance': 'h#'}],
'word_detail': [{'start': '1960', 'stop': '4000', 'utterance': 'would'},
{'start': '4000', 'stop': '9400', 'utterance': 'such'},
{'start': '9400', 'stop': '10680', 'utterance': 'an'},
{'start': '10680', 'stop': '15880', 'utterance': 'act'},
{'start': '15880', 'stop': '18297', 'utterance': 'of'},
{'start': '18297', 'stop': '27080', 'utterance': 'refusal'},
{'start': '27080', 'stop': '30120', 'utterance': 'be'},
{'start': '30120', 'stop': '37720', 'utterance': 'useful'}],
'dialect_region': 'DR4',
'sentence_type': 'SI',
'speaker_id': 'MMDM0',
'id': 'SI681'
}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- phonetic_detail: The phonemes that make up the sentence. The PHONCODE.DOC file contains a table of all the phonemic and phonetic symbols used in the TIMIT lexicon.
- word_detail: Word level split of the transcript.
- dialect_region: The dialect code of the recording.
- sentence_type: The type of the sentence - 'SA':'Dialect', 'SX':'Compact' or 'SI':'Diverse'.
- speaker_id: Unique id of the speaker. The same speaker id can be found for multiple data samples.
- id: ID of the data sample, in the form `<SENTENCE_TYPE><SENTENCE_NUMBER>`.
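Since the `id` field concatenates the sentence type and number, it can be split back apart with a small helper (an illustrative sketch, not part of the dataset loader; the type names follow the `sentence_type` field above):

```python
import re

def parse_timit_id(sample_id: str) -> dict:
    """Split a TIMIT sample id such as 'SI681' into sentence type and number.

    Sentence types per the card: 'SA' = Dialect, 'SX' = Compact, 'SI' = Diverse.
    """
    m = re.fullmatch(r"(SA|SX|SI)(\d+)", sample_id)
    if m is None:
        raise ValueError(f"unexpected id format: {sample_id!r}")
    type_names = {"SA": "Dialect", "SX": "Compact", "SI": "Diverse"}
    code, number = m.group(1), int(m.group(2))
    return {"sentence_type": code, "type_name": type_names[code], "number": number}

print(parse_timit_id("SI681"))
# → {'sentence_type': 'SI', 'type_name': 'Diverse', 'number': 681}
```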
### Data Splits
The speech material has been subdivided into portions for training and
testing. The default train-test split will be made available on data download.
The test data alone has a core portion containing 24 speakers, 2 male and 1 female
from each dialect region. More information about the test set can
be found [here](https://catalog.ldc.upenn.edu/docs/LDC93S1/TESTSET.TXT)
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings of individual speakers. Users agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
The dataset was created by John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, Victor Zue
### Licensing Information
[LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)
### Citation Information
```
@inproceedings{
title={TIMIT Acoustic-Phonetic Continuous Speech Corpus},
author={Garofolo, John S., et al},
ldc_catalog_no={LDC93S1},
DOI={https://doi.org/10.35111/17gk-bn40},
journal={Linguistic Data Consortium, Philadelphia},
year={1983}
}
```
### Contributions
Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
|
ashraq/esc50 | 2023-01-07T08:35:28.000Z | [
"region:us"
] | ashraq | null | null | null | 3 | 2,640 | https://github.com/karolpiczak/ESC-50
The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.
K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.
[DOI: http://dx.doi.org/10.1145/2733373.2806390] |
mteb/sts12-sts | 2022-09-27T19:11:50.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 4 | 2,631 | ---
language:
- en
--- |
aeslc | 2023-04-05T08:32:58.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"aspect-based-summarization",
"conversations-summarization",
"multi-document-summarization",
"email-headline-generation",
"arxiv:1906.03497",
"region:us"
] | null | A collection of email messages of employees in the Enron Corporation.
There are two features:
- email_body: email body text.
- subject_line: email subject text. | @misc{zhang2019email,
title={This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation},
author={Rui Zhang and Joel Tetreault},
year={2019},
eprint={1906.03497},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 4 | 2,619 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: 'AESLC: Annotated Enron Subject Line Corpus'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: aeslc
tags:
- aspect-based-summarization
- conversations-summarization
- multi-document-summarization
- email-headline-generation
dataset_info:
features:
- name: email_body
dtype: string
- name: subject_line
dtype: string
splits:
- name: train
num_bytes: 11902668
num_examples: 14436
- name: validation
num_bytes: 1660730
num_examples: 1960
- name: test
num_bytes: 1384177
num_examples: 1906
download_size: 11643743
dataset_size: 14947575
---
# Dataset Card for "aeslc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/ryanzhumich/AESLC
- **Paper:** [This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation](https://arxiv.org/abs/1906.03497)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 11.64 MB
- **Size of the generated dataset:** 14.95 MB
- **Total amount of disk used:** 26.59 MB
### Dataset Summary
A collection of email messages of employees in the Enron Corporation.
There are two features:
- email_body: email body text.
- subject_line: email subject text.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Monolingual English (mainly en-US) with some exceptions.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 11.64 MB
- **Size of the generated dataset:** 14.95 MB
- **Total amount of disk used:** 26.59 MB
An example of 'train' looks as follows.
```
{
"email_body": "B/C\n<<some doc>>\n",
"subject_line": "Service Agreement"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `email_body`: a `string` feature.
- `subject_line`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|14436| 1960|1906|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{zhang-tetreault-2019-email,
title = "This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation",
author = "Zhang, Rui and
Tetreault, Joel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1043",
doi = "10.18653/v1/P19-1043",
pages = "446--456",
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset. |
flax-sentence-embeddings/stackexchange_titlebody_best_voted_answer_jsonl | 2022-07-11T13:13:27.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | flax-sentence-embeddings | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
} | null | 4 | 2,610 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: stackexchange
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stackexchange](https://archive.org/details/stackexchange)
- **Repository:** [flax-sentence-embeddings](https://github.com/nreimers/flax-sentence-embeddings)
### Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the [Stack Exchange](https://stackexchange.com/) network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to its top-rated answer. The dataset is split by community, covering a variety of domains from 3D printing, economics, and Raspberry Pi to Emacs. An exhaustive list of all communities is available [here](https://stackexchange.com/sites).
### Languages
Stack Exchange content is mainly in English (en).
## Dataset Structure
### Data Instances
Each data sample is presented as follows:
```
{'title_body': 'How to determine if 3 points on a 3-D graph are collinear? Let the points $A, B$ and $C$ be $(x_1, y_1, z_1), (x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ respectively. How do I prove that the 3 points are collinear? What is the formula?',
'upvoted_answer': 'From $A(x_1,y_1,z_1),B(x_2,y_2,z_2),C(x_3,y_3,z_3)$ we can get their position vectors.\n\n$\\vec{AB}=(x_2-x_1,y_2-y_1,z_2-z_1)$ and $\\vec{AC}=(x_3-x_1,y_3-y_1,z_3-z_1)$.\n\nThen $||\\vec{AB}\\times\\vec{AC}||=0\\implies A,B,C$ collinear.',
```
This particular example corresponds to the [following page](https://math.stackexchange.com/questions/947555/how-to-determine-if-3-points-on-a-3-d-graph-are-collinear)
### Data Fields
The dataset contains the following fields:
- `title_body`: This is the concatenation of the title and body from the question
- `upvoted_answer`: This is the body from the most upvoted answer
### Data Splits
We provide multiple splits for this dataset, each corresponding to a given community channel. The number of pairs for each split is detailed below:
| | Number of pairs |
| ----- | ------ |
| apple | 92,487 |
| english | 100,640 |
| codereview | 41,748 |
| dba | 71,449 |
| mathoverflow | 85,289 |
| electronics | 129,494 |
| mathematica | 59,895 |
| drupal | 67,817 |
| magento | 79,241 |
| gaming | 82,887 |
| ell | 77,892 |
| gamedev | 40,154 |
| gis | 100,254 |
| askubuntu | 267,135 |
| diy | 52,896 |
| academia | 32,137 |
| blender | 54,153 |
| cs | 30,010 |
| chemistry | 27,061 |
| judaism | 26,085 |
| crypto | 19,404 |
| android | 38,077 |
| ja | 17,376 |
| christianity | 11,498 |
| graphicdesign | 28,083 |
| aviation | 18,755 |
| ethereum | 26,124 |
| biology | 19,277 |
| datascience | 20,503 |
| law | 16,133 |
| dsp | 17,430 |
| japanese | 20,948 |
| hermeneutics | 9,516 |
| bicycles | 15,708 |
| arduino | 16,281 |
| history | 10,766 |
| bitcoin | 22,474 |
| cooking | 22,641 |
| hinduism | 8,999 |
| codegolf | 8,211 |
| boardgames | 11,805 |
| emacs | 16,830 |
| economics | 8,844 |
| gardening | 13,246 |
| astronomy | 9,086 |
| islam | 10,052 |
| german | 13,733 |
| fitness | 8,297 |
| french | 10,578 |
| anime | 10,131 |
| craftcms | 11,236 |
| cstheory | 7,742 |
| engineering | 8,649 |
| buddhism | 6,787 |
| linguistics | 6,843 |
| ai | 5,763 |
| expressionengine | 10,742 |
| cogsci | 5,101 |
| chinese | 8,646 |
| chess | 6,392 |
| civicrm | 10,648 |
| literature | 3,539 |
| interpersonal | 3,398 |
| health | 4,494 |
| avp | 6,450 |
| earthscience | 4,396 |
| joomla | 5,887 |
| homebrew | 5,608 |
| expatriates | 4,913 |
| latin | 3,969 |
| matheducators | 2,706 |
| ham | 3,501 |
| genealogy | 2,895 |
| 3dprinting | 3,488 |
| elementaryos | 5,917 |
| bioinformatics | 3,135 |
| devops | 3,462 |
| hsm | 2,517 |
| italian | 3,101 |
| computergraphics | 2,306 |
| martialarts | 1,737 |
| bricks | 3,530 |
| freelancing | 1,663 |
| crafts | 1,659 |
| lifehacks | 2,576 |
| cseducators | 902 |
| materials | 1,101 |
| hardwarerecs | 2,050 |
| iot | 1,359 |
| eosio | 1,940 |
| languagelearning | 948 |
| korean | 1,406 |
| coffee | 1,188 |
| esperanto | 1,466 |
| beer | 1,012 |
| ebooks | 1,107 |
| iota | 775 |
| cardano | 248 |
| drones | 496 |
| conlang | 334 |
| pt | 103,277 |
| stats | 115,679 |
| unix | 155,414 |
| physics | 141,230 |
| tex | 171,628 |
| serverfault | 238,507 |
| salesforce | 87,272 |
| wordpress | 83,621 |
| softwareengineering | 51,326 |
| scifi | 54,805 |
| security | 51,355 |
| ru | 253,289 |
| superuser | 352,610 |
| sharepoint | 80,420 |
| rpg | 40,435 |
| travel | 36,533 |
| worldbuilding | 26,210 |
| meta | 1,000 |
| workplace | 24,012 |
| ux | 28,901 |
| money | 29,404 |
| webmasters | 30,370 |
| raspberrypi | 24,143 |
| photo | 23,204 |
| music | 19,936 |
| philosophy | 13,114 |
| puzzling | 17,448 |
| movies | 18,243 |
| quant | 12,933 |
| politics | 11,047 |
| space | 12,893 |
| mechanics | 18,613 |
| skeptics | 8,145 |
| rus | 16,528 |
| writers | 9,867 |
| webapps | 24,867 |
| softwarerecs | 11,761 |
| networkengineering | 12,590 |
| parenting | 5,998 |
| scicomp | 7,036 |
| sqa | 9,256 |
| sitecore | 7,838 |
| vi | 9,000 |
| spanish | 7,675 |
| pm | 5,435 |
| pets | 6,156 |
| sound | 8,303 |
| reverseengineering | 5,817 |
| outdoors | 5,278 |
| tridion | 5,907 |
| retrocomputing | 3,907 |
| robotics | 4,648 |
| quantumcomputing | 4,320 |
| sports | 4,707 |
| russian | 3,937 |
| opensource | 3,221 |
| woodworking | 2,955 |
| patents | 3,573 |
| tor | 4,167 |
| ukrainian | 1,767 |
| opendata | 3,842 |
| monero | 3,508 |
| sustainability | 1,674 |
| portuguese | 1,964 |
| mythology | 1,595 |
| musicfans | 2,431 |
| or | 1,490 |
| poker | 1,665 |
| windowsphone | 2,807 |
| moderators | 504 |
| stackapps | 1,518 |
| stellar | 1,078 |
| vegetarianism | 585 |
| tezos | 1,169 |
| total | 4,750,619 |
## Dataset Creation
### Curation Rationale
We primarily designed this dataset for training sentence embeddings. Sentence embeddings can be trained using a contrastive learning setup, in which the model learns to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be efficient, which makes dataset creation tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
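As an illustration of that contrastive setup, here is a minimal, framework-free sketch of an in-batch InfoNCE-style loss, in which each question embedding must pick out its paired answer among the other answers in the batch (the function name and temperature value are illustrative, not the actual training code):

```python
import math

def in_batch_contrastive_loss(questions, answers, temperature=0.05):
    """Toy InfoNCE-style loss over a batch of (question, answer) embedding
    pairs: questions[i] should score highest against answers[i], with the
    other answers in the batch acting as in-batch negatives."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def normalize(v):
        norm = math.sqrt(dot(v, v))
        return [x / norm for x in v]

    qs = [normalize(q) for q in questions]
    ans = [normalize(a) for a in answers]
    loss = 0.0
    for i, q in enumerate(qs):
        logits = [dot(q, a) / temperature for a in ans]
        m = max(logits)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_z)  # the correct pair sits at index i
    return loss / len(qs)
```

With matched pairs the loss approaches zero; with shuffled answers it grows, which is exactly the signal the embedding model is trained on.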
### Source Data
The source data are dumps from [Stack Exchange](https://archive.org/details/stackexchange)
#### Initial Data Collection and Normalization
We collected the data from the Stack Exchange communities listed above.
We filtered out questions whose title or body is shorter than 20 characters, as well as questions whose body is longer than 4096 characters.
When extracting the most upvoted answer, we kept only pairs with a gap of at least 100 votes between the most upvoted and most downvoted answers.
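The filtering rules above can be sketched as simple predicates (the function names are illustrative; the actual processing pipeline may differ):

```python
def keep_question(title: str, body: str) -> bool:
    """Apply the length filters: drop questions whose title or body is
    shorter than 20 characters, or whose body exceeds 4096 characters."""
    if len(title) < 20 or len(body) < 20:
        return False
    return len(body) <= 4096

def keep_answer_pair(upvotes_best: int, upvotes_worst: int) -> bool:
    """Keep a (best, worst) answer pair only when the vote gap is at
    least 100."""
    return upvotes_best - upvotes_worst >= 100
```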
#### Who are the source language producers?
Questions and answers are written by the community of developers on Stack Exchange.
## Additional Information
### Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
### Citation Information
```
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
}
```
### Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset. |
HuggingFaceH4/testing_h4 | 2023-07-21T07:27:54.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 2,595 | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: test_holdout_rm
num_bytes: 26133
num_examples: 10
- name: test_ift
num_bytes: 23368
num_examples: 10
- name: test_rl
num_bytes: 28324
num_examples: 10
- name: test_rm
num_bytes: 37997
num_examples: 10
- name: train_ift
num_bytes: 26975
num_examples: 10
- name: train_rl
num_bytes: 20943
num_examples: 10
- name: train_rm
num_bytes: 33531
num_examples: 10
download_size: 186492
dataset_size: 197271
---
# Dataset Card for "testing_h4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigcode/starcoderdata | 2023-05-16T10:05:48.000Z | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:other",
"region:us"
] | bigcode | null | null | null | 186 | 2,589 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
extra_gated_prompt: >-
## Terms of Use for The Stack
The Stack dataset is a collection of source code in over 300 programming
languages. We ask that you read and acknowledge the following points before
using the dataset:
1. The Stack is a collection of source code from repositories with various
licenses. Any use of all or part of the code gathered in The Stack must abide
by the terms of the original licenses, including attribution clauses when
relevant. We facilitate this by providing provenance information for each data
point.
2. The Stack is regularly updated to enact validated data removal requests. By
clicking on "Access repository", you agree to update your own version of The
Stack to the most recent usable version specified by the maintainers in [the
following
thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If
you have questions about dataset versions and allowed uses, please also ask
them in the dataset’s [community
discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new).
We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must
include [these Terms of
Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack)
and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact
information (email address and username) can be shared with the dataset
maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
# StarCoder Training Dataset
## Dataset description
This is the dataset used for training [StarCoder](https://huggingface.co/bigcode/starcoder) and [StarCoderBase](https://huggingface.co/bigcode/starcoderbase). It contains 783GB of code in 86 programming languages, and includes 54GB of GitHub issues, 13GB of Jupyter notebooks (as scripts and text-code pairs),
and 32GB of GitHub commits, which together amount to approximately 250 billion tokens.
## Dataset creation
The creation and filtering of The Stack is explained in the [original dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup). We additionally decontaminate and clean all 86 programming
languages in the dataset, as well as the GitHub issues, Jupyter notebooks, and GitHub commits. We also apply near-deduplication and remove PII; all details are mentioned in our [Paper: 💫 StarCoder, May The Source Be With You](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view)
## How to use the dataset
```python
from datasets import load_dataset
# to load python for example
ds = load_dataset("bigcode/starcoderdata", data_dir="python", split="train")
```
The GitHub issues, GitHub commits, and Jupyter notebooks subsets have different columns from the rest, so loading the entire dataset at once may fail. We suggest loading the programming languages separately from the following categories:
```
jupyter-scripts-dedup-filtered
jupyter-structured-clean-dedup
github-issues-filtered-structured
git-commits-cleaned
```
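For example, a small helper can keep these special subsets apart from the per-language directories when building `load_dataset` calls (the helper is a sketch for illustration, not part of the official loader):

```python
# Subsets whose schemas differ from the per-language splits (from the list above).
SPECIAL_SUBSETS = {
    "jupyter-scripts-dedup-filtered",
    "jupyter-structured-clean-dedup",
    "github-issues-filtered-structured",
    "git-commits-cleaned",
}

def load_kwargs(subset: str) -> dict:
    """Build keyword arguments for datasets.load_dataset for one subset.

    Special subsets have their own columns, so they should be loaded on
    their own rather than mixed with the programming-language directories.
    """
    return {
        "path": "bigcode/starcoderdata",
        "data_dir": subset,
        "split": "train",
    }

# e.g. load one special subset on its own:
# ds = load_dataset(**load_kwargs("github-issues-filtered-structured"))
```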
|
stas/openwebtext-10k | 2021-09-15T00:18:50.000Z | [
"region:us"
] | stas | An open-source replication of the WebText dataset from OpenAI.
This is a small subset representing the first 10K records from the original dataset - created for testing.
The full 8M-record dataset is at https://huggingface.co/datasets/openwebtext | @misc{Gokaslan2019OpenWeb,
title={OpenWebText Corpus},
author={Aaron Gokaslan*, Vanya Cohen*, Ellie Pavlick, Stefanie Tellex},
howpublished = {\url{http://Skylion007.github.io/OpenWebTextCorpus}},
year={2019}
} | null | 6 | 2,585 | 10K slice of OpenWebText - An open-source replication of the WebText dataset from OpenAI.
This is a small subset representing the first 10K records from the original dataset - created for testing.
The full 8M-record dataset is [here](https://huggingface.co/datasets/openwebtext).
```
$ python -c "from datasets import load_dataset; ds=load_dataset('stas/openwebtext-10k'); print(ds)"
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 10000
})
})
```
* Records: 10,000
* compressed size: ~15MB
* uncompressed size: 50MB
To convert to jsonlines:
```
from datasets import load_dataset
dataset_name = "stas/openwebtext-10k"
name = dataset_name.split('/')[-1]
ds = load_dataset(dataset_name, split='train')
ds.to_json(f"{name}.jsonl", orient="records", lines=True)
```
To see how this subset was created, here is the [instructions file](https://huggingface.co/datasets/stas/openwebtext-10k/blob/main/process.txt).
|
facat/sci-llm-new | 2023-10-01T12:45:46.000Z | [
"region:us"
] | facat | null | null | null | 0 | 2,575 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: test2
path: data/test2-*
- split: train
path: data/train-*
- split: train_attack
path: data/train_attack-*
- split: train_new
path: data/train_new-*
- split: train_60k
path: data/train_60k-*
dataset_info:
features:
- name: prompt
dtype: string
- name: context
dtype: string
- name: chosen
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
splits:
- name: test
num_bytes: 2214599
num_examples: 500
- name: test2
num_bytes: 1111116
num_examples: 200
- name: train
num_bytes: 1207315884
num_examples: 209092
- name: train_attack
num_bytes: 400564505
num_examples: 95835
- name: train_new
num_bytes: 476608148
num_examples: 66743
- name: train_60k
num_bytes: 330020705
num_examples: 60347
download_size: 1188562568
dataset_size: 2417834957
---
# Dataset Card for "sci-llm-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nguha/legalbench | 2023-08-24T03:54:25.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"legal",
"law",
"finance",
"arxiv:2308.11462",
"region:us"
] | nguha | """
#TODO
import datasets
_HOMEPAGE = ""
_URL = "data.tar.gz"
_CONFIGS = {}
_CONFIGS["abercrombie"] = {
"description": "Determine the *Abercrombie* classification for a mark/product pair.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["canada_tax_court_outcomes"] = {
"description": "Classify whether an excerpt from a Canada Tax Court decision includes the outcome of the appeal, and if so, specify whether the appeal was allowed or dismissed.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC 4.0"
}
_CONFIGS["citation_prediction_classification"] = {
"description": "Given a legal statement and a case citation, determine if the citation is supportive of the legal statement.",
"features": {
"answer": datasets.Value("string"),
"citation": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["citation_prediction_open"] = {
"description": "Given a legal statement, predict the name of the case which best supports the statement.",
"features": {
"answer": datasets.Value("string"),
"circuit": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["consumer_contracts_qa"] = {
"description": "Answer yes/no questions on the rights and obligations created by clauses in terms of services agreements.",
"features": {
"answer": datasets.Value("string"),
"contract": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string")
},
"license": "CC BY-NC 4.0"
}
_CONFIGS["contract_nli_confidentiality_of_agreement"] = {
"description": "Identify if the clause provides that the Receiving Party shall not disclose the fact that Agreement was agreed or negotiated.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_explicit_identification"] = {
"description": "Identify if the clause provides that all Confidential Information shall be expressly identified by the Disclosing Party.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_inclusion_of_verbally_conveyed_information"] = {
"description": "Identify if the clause provides that Confidential Information may include verbally conveyed information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_limited_use"] = {
"description": "Identify if the clause provides that the Receiving Party shall not use any Confidential Information for any purpose other than the purposes stated in Agreement.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_no_licensing"] = {
"description": "Identify if the clause provides that the Agreement shall not grant Receiving Party any right to Confidential Information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_notice_on_compelled_disclosure"] = {
"description": "Identify if the clause provides that the Receiving Party shall notify Disclosing Party in case Receiving Party is required by law, regulation or judicial process to disclose any Confidential Information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_permissible_acquirement_of_similar_information"] = {
"description": "Identify if the clause provides that the Receiving Party may acquire information similar to Confidential Information from a third party.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_permissible_copy"] = {
"description": "Identify if the clause provides that the Receiving Party may create a copy of some Confidential Information in some circumstances.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_permissible_development_of_similar_information"] = {
"description": "Identify if the clause provides that the Receiving Party may independently develop information similar to Confidential Information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_permissible_post-agreement_possession"] = {
"description": "Identify if the clause provides that the Receiving Party may retain some Confidential Information even after the return or destruction of Confidential Information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_return_of_confidential_information"] = {
"description": "Identify if the clause provides that the Receiving Party shall destroy or return some Confidential Information upon the termination of Agreement.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_sharing_with_employees"] = {
"description": "Identify if the clause provides that the Receiving Party may share some Confidential Information with some of Receiving Party's employees.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_sharing_with_third-parties"] = {
    "description": "Identify if the clause provides that the Receiving Party may share some Confidential Information with some third-parties (including consultants, agents and professional advisors).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_nli_survival_of_obligations"] = {
    "description": "Identify if the clause provides that some obligations of Agreement may survive termination of Agreement.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["contract_qa"] = {
"description": "Answer yes/no questions about whether contractual clauses discuss particular issues.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["corporate_lobbying"] = {
"description": "Predict if a proposed bill is relevant to a company given information about the bill and the company.",
"features": {
"answer": datasets.Value("string"),
"bill_summary": datasets.Value("string"),
"bill_title": datasets.Value("string"),
"company_description": datasets.Value("string"),
"company_name": datasets.Value("string"),
"index": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_affiliate_license-licensee"] = {
"description": "Classify if a clause describes a license grant to a licensee (incl. sublicensor) and the affiliates of such licensee/sublicensor.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_affiliate_license-licensor"] = {
"description": "Classify if the clause describes a license grant by affiliates of the licensor or that includes intellectual property of affiliates of the licensor.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_anti-assignment"] = {
"description": "Classify if the clause requires consent or notice of a party if the contract is assigned to a third party.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_audit_rights"] = {
"description": "Classify if the clause gives a party the right to audit the books, records, or physical locations of the counterparty to ensure compliance with the contract.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_cap_on_liability"] = {
    "description": "Classify if the clause specifies a cap on liability upon the breach of a party\u2019s obligation. This includes a time limitation for the counterparty to bring claims or a maximum amount for recovery.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_change_of_control"] = {
    "description": "Classify if the clause gives one party the right to terminate, or requires consent or notice of the counterparty, if such party undergoes a change of control, such as a merger, stock sale, transfer of all or substantially all of its assets or business, or assignment by operation of law.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_competitive_restriction_exception"] = {
"description": "Classify if the clause mentions exceptions or carveouts to Non-Compete, Exclusivity and No-Solicit of Customers.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_covenant_not_to_sue"] = {
"description": "Classify if the clause specifies that a party is restricted from contesting the validity of the counterparty\u2019s ownership of intellectual property or otherwise bringing a claim against the counterparty for matters unrelated to the contract.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_effective_date"] = {
"description": "Classify if the clause specifies the date upon which the agreement becomes effective.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_exclusivity"] = {
"description": "Classify if the clause specifies exclusive dealing commitment with the counterparty. This includes a commitment to procure all \u201crequirements\u201d from one party of certain technology, goods, or services or a prohibition on licensing or selling technology, goods or services to third parties, or a prohibition on collaborating or working with other parties), whether during the contract or after the contract ends (or both).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_expiration_date"] = {
"description": "Classify if the clause specifies the date upon which the initial term expires.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_governing_law"] = {
"description": "Classify if the clause specifies which state/country's law governs the contract.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_insurance"] = {
"description": "Classify if clause creates a requirement for insurance that must be maintained by one party for the benefit of the counterparty.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_ip_ownership_assignment"] = {
"description": "Classify if the clause specifies that intellectual property created by one party become the property of the counterparty, either per the terms of the contract or upon the occurrence of certain events.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_irrevocable_or_perpetual_license"] = {
"description": "Classify if the clause specifies a license grant that is irrevocable or perpetual.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_joint_ip_ownership"] = {
"description": "Classify if the clause provides for joint or shared ownership of intellectual property between the parties to the contract.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_license_grant"] = {
"description": "Classify if the clause contains a license granted by one party to its counterparty.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_liquidated_damages"] = {
"description": "Classify if the clause awards either party liquidated damages for breach or a fee upon the termination of a contract (termination fee).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_minimum_commitment"] = {
    "description": "Classify if the clause specifies a minimum order size or minimum amount or units per time period that one party must buy from the counterparty.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_most_favored_nation"] = {
    "description": "Classify if the clause specifies that a third party getting better terms on the licensing or sale of technology/goods/services described in the contract shall entitle the buyer to those better terms.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_no-solicit_of_customers"] = {
"description": "Classify if the clause restricts a party from contracting or soliciting customers or partners of the counterparty, whether during the contract or after the contract ends (or both).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_no-solicit_of_employees"] = {
"description": "Classify if the clause restricts a party\u2019s soliciting or hiring employees and/or contractors from the counterparty, whether during the contract or after the contract ends (or both).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_non-compete"] = {
"description": "Classify if the clause restricts the ability of a party to compete with the counterparty or operate in a certain geography or business or technology sector.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_non-disparagement"] = {
"description": "Classify if the clause requires a party not to disparage the counterparty.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_non-transferable_license"] = {
"description": "Classify if the clause limits the ability of a party to transfer the license being granted to a third party.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_notice_period_to_terminate_renewal"] = {
"description": "Classify if the clause specifies a notice period required to terminate renewal.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_post-termination_services"] = {
"description": "Classify if the clause subjects a party to obligations after the termination or expiration of a contract, including any post-termination transition, payment, transfer of IP, wind-down, last-buy, or similar commitments.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_price_restrictions"] = {
"description": "Classify if the clause places a restriction on the ability of a party to raise or reduce prices of technology, goods, or services provided.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_renewal_term"] = {
"description": "Classify if the clause specifies a renewal term.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_revenue-profit_sharing"] = {
    "description": "Classify if the clause requires a party to share revenue or profit with the counterparty for any technology, goods, or services.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_rofr-rofo-rofn"] = {
    "description": "Classify if the clause grants one party a right of first refusal, right of first offer, or right of first negotiation to purchase, license, market, or distribute equity interest, technology, assets, products or services.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_source_code_escrow"] = {
"description": "Classify if the clause requires one party to deposit its source code into escrow with a third party, which can be released to the counterparty upon the occurrence of certain events (bankruptcy, insolvency, etc.).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_termination_for_convenience"] = {
"description": "Classify if the clause specifies that one party can terminate this contract without cause (solely by giving a notice and allowing a waiting period to expire).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_third_party_beneficiary"] = {
    "description": "Classify if the clause specifies that there is a non-contracting party who is a beneficiary to some or all of the clauses in the contract and therefore can enforce its rights against a contracting party.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_uncapped_liability"] = {
    "description": "Classify if the clause specifies that a party\u2019s liability is uncapped upon the breach of its obligation in the contract. This also includes uncapped liability for a particular type of breach such as IP infringement or breach of confidentiality obligation.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_unlimited-all-you-can-eat-license"] = {
"description": "Classify if the clause grants one party an \u201centerprise,\u201d \u201call you can eat\u201d or unlimited usage license.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_volume_restriction"] = {
"description": "Classify if the clause specifies a fee increase or consent requirement, etc. if one party\u2019s use of the product/services exceeds certain threshold.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["cuad_warranty_duration"] = {
"description": "Classify if the clause specifies a duration of any warranty against defects or errors in technology, products, or services provided under the contract.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string"),
"document_name": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["definition_classification"] = {
"description": "Given a sentence from a Supreme Court opinion, classify whether or not that sentence offers a definition of a term.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-SA 4.0"
}
_CONFIGS["definition_extraction"] = {
"description": "Given a sentence from a Supreme Court opinion offering a definition of a term, extract the term being defined.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-SA 4.0"
}
_CONFIGS["diversity_1"] = {
"description": "Given a set of facts about the citizenships of plaintiffs and defendants and the amounts associated with claims, determine if the criteria for diversity jurisdiction have been met (variant 1).",
"features": {
"aic_is_met": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"parties_are_diverse": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["diversity_2"] = {
"description": "Given a set of facts about the citizenships of plaintiffs and defendants and the amounts associated with claims, determine if the criteria for diversity jurisdiction have been met (variant 2).",
"features": {
"aic_is_met": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"parties_are_diverse": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["diversity_3"] = {
"description": "Given a set of facts about the citizenships of plaintiffs and defendants and the amounts associated with claims, determine if the criteria for diversity jurisdiction have been met (variant 3).",
"features": {
"aic_is_met": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"parties_are_diverse": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["diversity_4"] = {
"description": "Given a set of facts about the citizenships of plaintiffs and defendants and the amounts associated with claims, determine if the criteria for diversity jurisdiction have been met (variant 4).",
"features": {
"aic_is_met": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"parties_are_diverse": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["diversity_5"] = {
"description": "Given a set of facts about the citizenships of plaintiffs and defendants and the amounts associated with claims, determine if the criteria for diversity jurisdiction have been met (variant 5).",
"features": {
"aic_is_met": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"parties_are_diverse": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["diversity_6"] = {
"description": "Given a set of facts about the citizenships of plaintiffs and defendants and the amounts associated with claims, determine if the criteria for diversity jurisdiction have been met (variant 6).",
"features": {
"aic_is_met": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"parties_are_diverse": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["function_of_decision_section"] = {
"description": "Classify the function of different sections of legal written opinions.",
"features": {
"Citation": datasets.Value("string"),
"Paragraph": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["hearsay"] = {
"description": "Classify if a particular piece of evidence qualifies as hearsay.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"slice": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["insurance_policy_interpretation"] = {
"description": "Given an insurance claim and policy, determine whether the claim is covered by the policy.",
"features": {
"answer": datasets.Value("string"),
"claim": datasets.Value("string"),
"index": datasets.Value("string"),
"policy": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["international_citizenship_questions"] = {
"description": "Answer questions about citizenship law from across the world.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["jcrew_blocker"] = {
"description": "Classify if a clause in a loan agreement is a J.Crew blocker provision.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["learned_hands_benefits"] = {
"description": "Classify if a user post implicates legal isssues related to benefits.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_business"] = {
"description": "Classify if a user post implicates legal isssues related to business.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_consumer"] = {
"description": "Classify if a user post implicates legal isssues related to consumer.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_courts"] = {
"description": "Classify if a user post implicates legal isssues related to courts.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_crime"] = {
"description": "Classify if a user post implicates legal isssues related to crime.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_divorce"] = {
"description": "Classify if a user post implicates legal isssues related to divorce.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_domestic_violence"] = {
"description": "Classify if a user post implicates legal isssues related to domestic_violence.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_education"] = {
"description": "Classify if a user post implicates legal isssues related to education.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_employment"] = {
"description": "Classify if a user post implicates legal isssues related to employment.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_estates"] = {
"description": "Classify if a user post implicates legal isssues related to estates.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_family"] = {
"description": "Classify if a user post implicates legal isssues related to family.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_health"] = {
"description": "Classify if a user post implicates legal isssues related to health.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_housing"] = {
"description": "Classify if a user post implicates legal isssues related to housing.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_immigration"] = {
"description": "Classify if a user post implicates legal isssues related to immigration.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_torts"] = {
"description": "Classify if a user post implicates legal isssues related to torts.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
_CONFIGS["learned_hands_traffic"] = {
"description": "Classify if a user post implicates legal isssues related to traffic.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC-SA 4.0"
}
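# Illustrative sketch only (not part of the original script): the sixteen
# learned_hands_* entries above follow a single template and could equivalently
# be generated in a loop. Plain "string" stands in for datasets.Value("string")
# below so the sketch runs without the datasets library installed.
#
# ```python
# # Topics taken verbatim from the learned_hands_* config keys above.
# LEARNED_HANDS_TOPICS = [
#     "benefits", "business", "consumer", "courts", "crime", "divorce",
#     "domestic_violence", "education", "employment", "estates", "family",
#     "health", "housing", "immigration", "torts", "traffic",
# ]
#
# learned_hands_configs = {}
# for topic in LEARNED_HANDS_TOPICS:
#     learned_hands_configs[f"learned_hands_{topic}"] = {
#         "description": (
#             "Classify if a user post implicates legal issues related to "
#             f"{topic.replace('_', ' ')}."
#         ),
#         # "string" is a stand-in for datasets.Value("string").
#         "features": {
#             "answer": "string",
#             "index": "string",
#             "text": "string",
#         },
#         "license": "CC BY-NC-SA 4.0",
#     }
# ```
#
# The explicit per-entry form used in this script keeps each config greppable
# by name, at the cost of the repetition the loop would remove.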
_CONFIGS["legal_reasoning_causality"] = {
"description": "Given an excerpt from a district court opinion, classify if it relies on statistical evidence in its reasoning.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["maud_ability_to_consummate_concept_is_subject_to_mae_carveouts"] = {
"description": "Read an excerpt from a merger agreement and answer: is the \u201cability to consummate\u201d concept subject to Material Adverse Effect (MAE) carveouts?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_accuracy_of_fundamental_target_rws_bringdown_standard"] = {
"description": "Read an excerpt from a merger agreement and answer: how accurate must the fundamental representations and warranties be according to the bring down provision?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_accuracy_of_target_capitalization_rw_(outstanding_shares)_bringdown_standard_answer"] = {
"description": "Read an excerpt from a merger agreement and answer: how accurate must the capitalization representations and warranties be according to the bring down provision?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_accuracy_of_target_general_rw_bringdown_timing_answer"] = {
"description": "Read an excerpt from a merger agreement and answer: when are representations and warranties required to be made according to the bring down provision?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_additional_matching_rights_period_for_modifications_(cor)"] = {
"description": "Read an excerpt from a merger agreement and answer: how long is the additional matching rights period for modifications in case the board changes its recommendation?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_application_of_buyer_consent_requirement_(negative_interim_covenant)"] = {
"description": "Read an excerpt from a merger agreement and answer: what negative covenants does the requirement of Buyer consent apply to?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_buyer_consent_requirement_(ordinary_course)"] = {
"description": "Read an excerpt from a merger agreement and answer: in case the Buyer\u2019s consent for the acquired company\u2019s ordinary business operations is required, are there any limitations on the Buyer\u2019s right to condition, withhold, or delay their consent?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_change_in_law__subject_to_disproportionate_impact_modifier"] = {
"description": "Read an excerpt from a merger agreement and answer: do changes in law that have disproportionate impact qualify for Material Adverse Effect (MAE)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_changes_in_gaap_or_other_accounting_principles__subject_to_disproportionate_impact_modifier"] = {
"description": "Read an excerpt from a merger agreement and answer: do changes in GAAP or other accounting principles that have disproportionate impact qualify for Material Adverse Effect (MAE)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_cor_permitted_in_response_to_intervening_event"] = {
"description": "Read an excerpt from a merger agreement and answer: is Change of Recommendation permitted in response to an intervening event?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_cor_permitted_with_board_fiduciary_determination_only"] = {
"description": "Read an excerpt from a merger agreement and answer: is Change of Recommendation permitted as long as the board determines that such change is required to fulfill its fiduciary obligations?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_cor_standard_(intervening_event)"] = {
"description": "Read an excerpt from a merger agreement and answer: what standard should the board follow when determining whether to change its recommendation in response to an intervening event?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_cor_standard_(superior_offer)"] = {
"description": "Read an excerpt from a merger agreement and answer: what standard should the board follow when determining whether to change its recommendation in connection with a superior offer?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_definition_contains_knowledge_requirement_-_answer"] = {
"description": "Read an excerpt from a merger agreement and answer: what is the knowledge requirement in the definition of \u201cIntervening Event\u201d?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_definition_includes_asset_deals"] = {
"description": "Read an excerpt from a merger agreement and answer: what qualifies as a superior offer in terms of asset deals?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_definition_includes_stock_deals"] = {
"description": "Read an excerpt from a merger agreement and answer: what qualifies as a superior offer in terms of stock deals?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_fiduciary_exception__board_determination_standard"] = {
"description": "Read an excerpt from a merger agreement and answer: under what circumstances could the Board take actions on a different acquisition proposal notwithstanding the no-shop provision?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_fiduciary_exception_board_determination_trigger_(no_shop)"] = {
"description": "Read an excerpt from a merger agreement and answer: what type of offer could the Board take actions on notwithstanding the no-shop provision?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_financial_point_of_view_is_the_sole_consideration"] = {
"description": "Read an excerpt from a merger agreement and answer: is \u201cfinancial point of view\u201d the sole consideration when determining whether an offer is superior?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_fls_(mae)_standard"] = {
"description": "Read an excerpt from a merger agreement and answer: what is the Forward Looking Standard (FLS) with respect to Material Adverse Effect (MAE)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_general_economic_and_financial_conditions_subject_to_disproportionate_impact_modifier"] = {
"description": "Read an excerpt from a merger agreement and answer: do changes caused by general economic and financial conditions that have disproportionate impact qualify for Material Adverse Effect (MAE)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_includes_consistent_with_past_practice"] = {
"description": "Read an excerpt from a merger agreement and answer: does the wording of the Efforts Covenant clause include \u201cconsistent with past practice\u201d?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_initial_matching_rights_period_(cor)"] = {
"description": "Read an excerpt from a merger agreement and answer: how long is the initial matching rights period in case the board changes its recommendation?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_initial_matching_rights_period_(ftr)"] = {
"description": "Read an excerpt from a merger agreement and answer: how long is the initial matching rights period in connection with the Fiduciary Termination Right (FTR)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_intervening_event_-_required_to_occur_after_signing_-_answer"] = {
"description": "Read an excerpt from a merger agreement and answer: is an \u201cIntervening Event\u201d required to occur after signing?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_knowledge_definition"] = {
"description": "Read an excerpt from a merger agreement and answer: what counts as Knowledge?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_liability_standard_for_no-shop_breach_by_target_non-do_representatives"] = {
"description": "Read an excerpt from a merger agreement and answer: what is the liability standard for no-shop breach by Target Non-D&O Representatives?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_ordinary_course_efforts_standard"] = {
"description": "Read an excerpt from a merger agreement and answer: what is the efforts standard?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_pandemic_or_other_public_health_event__subject_to_disproportionate_impact_modifier"] = {
"description": "Read an excerpt from a merger agreement and answer: do pandemics or other public health events have to have disproportionate impact to qualify for Material Adverse Effect (MAE)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_pandemic_or_other_public_health_event_specific_reference_to_pandemic-related_governmental_responses_or_measures"] = {
"description": "Read an excerpt from a merger agreement and answer: is there specific reference to pandemic-related governmental responses or measures in the clause that qualifies pandemics or other public health events for Material Adverse Effect (MAE)?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_relational_language_(mae)_applies_to"] = {
"description": "Read an excerpt from a merger agreement and answer: what carveouts pertaining to Material Adverse Effect (MAE) does the relational language apply to?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_specific_performance"] = {
"description": "Read an excerpt from a merger agreement and answer: what is the wording of the Specific Performance clause regarding the parties\u2019 entitlement in the event of a contractual breach?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_tail_period_length"] = {
"description": "Read an excerpt from a merger agreement and answer: how long is the Tail Period?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["maud_type_of_consideration"] = {
"description": "Read an excerpt from a merger agreement and answer: what type of consideration is specified in this agreement?",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["nys_judicial_ethics"] = {
"description": "Answer questions on judicial ethics from the New York State Unified Court System Advisory Committee.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string"),
"year": datasets.Value("string")
},
"license": "MIT"
}
_CONFIGS["opp115_data_retention"] = {
"description": "Given a clause from a privacy policy, classify if the clause describes how long user information is stored.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_data_security"] = {
"description": "Given a clause from a privacy policy, classify if the clause describes how user information is protected.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_do_not_track"] = {
"description": "Given a clause from a privacy policy, classify if the clause describes if and how Do Not Track signals for online tracking and advertising are honored.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_first_party_collection_use"] = {
"description": "Given a clause from a privacy policy, classify if the clause describes how and why a service provider collects user information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_international_and_specific_audiences"] = {
"description": "Given a clause from a privacy policy, classify if the clause describe practices that pertain only to a specific group of users (e.g., children, Europeans, or California residents).",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_policy_change"] = {
"description": "Given a clause from a privacy policy, classify if the clause describes if and how users will be informed about changes to the privacy policy.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_third_party_sharing_collection"] = {
"description": "Given a clause from a privacy policy, classify if the clause describe how user information may be shared with or collected by third parties.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_user_access,_edit_and_deletion"] = {
"description": "Given a clause from a privacy policy, classify if the clause describes if and how users may access, edit, or delete their information.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["opp115_user_choice_control"] = {
"description": "Given a clause fro ma privacy policy, classify if the clause describes the choices and control options available to users.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "Creative Commons Attribution-NonCommercial License"
}
_CONFIGS["oral_argument_question_purpose"] = {
"description": "Given a question asked during oral argument, classify the purpose of the question.",
"features": {
"Docket No.": datasets.Value("string"),
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["overruling"] = {
"description": "Classify whether a sentence from a judicial opinion overrules a previous case.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["personal_jurisdiction"] = {
"description": "Given a fact pattern describing the set of contacts between a plaintiff, defendant, and forum, determine if a court in that forum could excercise personal jurisdiction over the defendant.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"slice": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["privacy_policy_entailment"] = {
"description": "Given a privacy policy clause and a description of the clause, determine if the description is correct.",
"features": {
"answer": datasets.Value("string"),
"description": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC 3.0"
}
_CONFIGS["privacy_policy_qa"] = {
"description": "Given a question and a clause from a privacy policy, determine if the clause contains enough information to answer the question.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "MIT"
}
_CONFIGS["proa"] = {
"description": "Given a statute, determine if the text contains an explicit private right of action.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["rule_qa"] = {
"description": "Answer questions about federal and state law.",
"features": {
"answer": datasets.Value("string"),
"doctrine": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["sara_entailment"] = {
"description": "Given a statute, a fact pattern, and an assertion, determine if the assertion is \"entailed\" by the fact pattern and statute.",
"features": {
"answer": datasets.Value("string"),
"case id": datasets.Value("string"),
"description": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string"),
"statute": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "MIT"
}
_CONFIGS["sara_numeric"] = {
"description": "Given a statute and a set of facts, determine how much tax an individual owes.",
"features": {
"answer": datasets.Value("string"),
"case id": datasets.Value("string"),
"description": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string"),
"statute": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "MIT"
}
_CONFIGS["scalr"] = {
"description": "Choice Selection",
"features": {
"answer": datasets.Value("string"),
"choice_0": datasets.Value("string"),
"choice_1": datasets.Value("string"),
"choice_2": datasets.Value("string"),
"choice_3": datasets.Value("string"),
"choice_4": datasets.Value("string"),
"index": datasets.Value("string"),
"question": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["ssla_company_defendants"] = {
"description": "Extract the identities of the company defendants from excerpts of securities class action complaints.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["ssla_individual_defendants"] = {
"description": "Extract the identities of individual defendants from excerpts of securities class action complaints.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["ssla_plaintiff"] = {
"description": "Extract the identities of the plaintiffs from excerpts of securities class action complaints.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["successor_liability"] = {
"description": "Given a fact pattern, identify the type of successor liability present.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"issue": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_best_practice_accountability"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer maintains internal compliance procedures on company standards regarding human trafficking and slavery.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_best_practice_audits"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer performs any type of audit, or reserves the right to audit.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_best_practice_certification"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer requires direct suppliers to certify that they comply with labor and anti-trafficking laws.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_best_practice_training"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer provides training to employees on human trafficking and slavery.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_best_practice_verification"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer engages in verification and auditing as one practice, expresses that it may conduct an audit, or expressess that it is assessing supplier risks through a review of the US Dept. of Labor's List.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_disclosed_accountability"] = {
"description": "Given a supply chain disclosure, determine whether the statement discloses to what extent, if any, that the retail seller or manufacturer maintains internal accountability standards and procedures for employees or contractors failing to meet company standards regarding slavery and trafficking.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_disclosed_audits"] = {
"description": "Given a disclosure, determine whether the statement discloses to what extent, if any, that the retail seller or manufacturer conducts audits of suppliers to evaluate supplier compliance with company standards for trafficking and slavery in supply chains.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_disclosed_certification"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses to what extent, if any, that the retail seller or manufacturer requires direct suppliers to certify that materials incorporated into the product comply with the laws regarding slavery and human trafficking of the country or countries in which they are doing business.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_disclosed_training"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer provides company employees and management, who have direct responsibility for supply chain management, training on human trafficking and slavery, particularly with respect to mitigating risks within the supply chains of products.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["supply_chain_disclosure_disclosed_verification"] = {
"description": "Given a supply chain disclosure, determine if the statement discloses whether the retail seller or manufacturer engages in verification of product supply chains to evaluate and address risks of human trafficking and slavery.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
_CONFIGS["telemarketing_sales_rule"] = {
"description": "Determine how 16 C.F.R. \u00a7 310.3(a)(1) and 16 C.F.R. \u00a7 310.3(a)(2) (governing deceptive practices) apply to different fact patterns.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY 4.0"
}
_CONFIGS["textualism_tool_dictionaries"] = {
"description": "Determine if a paragraph from a judicial opinion is applying a form textualism that relies on the dictionary meaning of terms.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC 4.0"
}
_CONFIGS["textualism_tool_plain"] = {
"description": "Determine if a paragraph from a judicial opinion is applying a form textualism that relies on the ordinary (\"plain\") meaning of terms.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC BY-NC 4.0"
}
_CONFIGS["ucc_v_common_law"] = {
"description": "Determine if a contract is governed by the Uniform Commercial Code (UCC) or the common law of contracts.",
"features": {
"answer": datasets.Value("string"),
"contract": datasets.Value("string"),
"index": datasets.Value("string")
},
"license": "CC By 4.0"
}
_CONFIGS["unfair_tos"] = {
"description": "Given a clause from a terms-of-service contract, determine the category the clause belongs to.",
"features": {
"answer": datasets.Value("string"),
"index": datasets.Value("string"),
"text": datasets.Value("string")
},
"license": "CC by 4.0"
}
class LegalBench(datasets.GeneratorBasedBuilder): | """
#TODO
_DESCRIPTION = | null | 18 | 2,565 | ---
license: other
task_categories:
- text-classification
- question-answering
- text-generation
language:
- en
tags:
- legal
- law
- finance
size_categories:
- 10K<n<100K
---
# Dataset Card for LegalBench
- **Homepage: https://hazyresearch.stanford.edu/legalbench/**
- **Repository: https://github.com/HazyResearch/legalbench/**
- **Paper: https://arxiv.org/abs/2308.11462**
## Dataset Description
### Dataset Summary
The LegalBench project is an ongoing open science effort to collaboratively curate tasks for evaluating legal reasoning in English large language models (LLMs). The benchmark currently consists of 162 tasks gathered from 40 contributors.
If you have questions about the project or would like to get involved, please see the website for more information.
### Supported Tasks and Leaderboards
LegalBench tasks span multiple types (binary classification, multi-class classification, extraction, generation, entailment), multiple types of text (statutes, judicial opinions, contracts, etc.), and multiple areas of law (evidence, contracts, civil procedure, etc.). For more information on tasks, we recommend visiting the website, where you can search through task descriptions, or the GitHub repository, which contains more granular task descriptions. We also recommend reading the paper, which provides more background on task significance and the construction process.
### Languages
All LegalBench tasks are in English.
## Dataset Structure
### Data Instances
Detailed descriptions of the instances for each task can be found in the GitHub repository. An example of an instance, for the `abercrombie` task, is provided below:
```
{
"text": "The mark "Ivory" for a product made of elephant tusks.",
"label": "generic"
"idx": 0
}
```
A substantial number of LegalBench tasks are binary classification tasks, which require the LLM to determine if a piece of text has some legal attribute. Because these are framed as Yes/No questions, the label space is "Yes" or "No".
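For quick sanity checks on such Yes/No tasks, a normalized exact-match scorer is usually sufficient. The sketch below is illustrative only — it is not the official LegalBench evaluation code (see the GitHub repository for that):

```python
# Illustrative only: a minimal normalized exact-match scorer for Yes/No tasks.
# This is a sketch, not the official LegalBench evaluation code.

def normalize(answer: str) -> str:
    """Lowercase and strip surrounding whitespace/punctuation so 'Yes.' matches 'yes'."""
    return answer.strip().strip(".!,").lower()

def exact_match_accuracy(predictions, golds) -> float:
    """Fraction of predictions matching the gold label after normalization."""
    pairs = list(zip(predictions, golds))
    hits = sum(normalize(p) == normalize(g) for p, g in pairs)
    return hits / len(pairs)

print(round(exact_match_accuracy(["Yes.", "no", "Yes"], ["yes", "No", "No"]), 2))  # 0.67
```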
### Data Fields
Detailed descriptions of the data fields for each task can be found in the GitHub repository.
### Data Splits
Each task has a training and evaluation split. Following [RAFT](https://huggingface.co/datasets/ought/raft), train splits consist of only a few labeled instances, reflecting the few-shot regime in which most LLMs are evaluated.
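Because the train split supplies the demonstrations, a typical workflow concatenates them into a k-shot prompt. The sketch below is illustrative: the field names (`text`, `answer`) mirror the task configurations, but the prompt template itself is not the official one.

```python
# Sketch of assembling a k-shot prompt from a LegalBench-style train split.
# Field names ("text", "answer") mirror the task configurations; the prompt
# template itself is illustrative, not the official one.

def build_prompt(train_examples, test_text, instruction):
    """Concatenate an instruction, labeled demonstrations, and the test input."""
    parts = [instruction]
    for ex in train_examples:
        parts.append(f"Text: {ex['text']}\nAnswer: {ex['answer']}")
    parts.append(f"Text: {test_text}\nAnswer:")
    return "\n\n".join(parts)

demos = [{"text": "The mark 'Ivory' for a product made of elephant tusks.",
          "answer": "generic"}]
prompt = build_prompt(demos, "The mark 'Apple' for computers.",
                      "Classify the mark under the Abercrombie spectrum.")
print(prompt)
```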
## Dataset Creation
### Curation Rationale
LegalBench was created to enable researchers to better benchmark the legal reasoning capabilities of LLMs.
### Source Data
#### Initial Data Collection and Normalization
Broadly, LegalBench tasks are drawn from three sources. The first source is existing, publicly available datasets and corpora, most of which were originally released for non-LLM evaluation settings. In creating tasks for LegalBench from these sources, we often significantly reformatted the data and restructured the prediction objective. For instance, the original [CUAD dataset](https://github.com/TheAtticusProject/cuad) contains annotations on long documents and is intended for evaluating extraction with span-prediction models. We restructured this corpus to generate a binary classification task for each type of contractual clause. While the original corpus emphasized the long-document aspects of contracts, our restructured tasks emphasize whether LLMs can identify the distinguishing features of different types of clauses. The second source is datasets that were previously constructed by legal professionals but never released; this primarily includes datasets hand-coded by legal scholars as part of prior empirical legal projects. The last category consists of tasks developed specifically for LegalBench by the authors of the paper. Overall, tasks are drawn from 36 distinct corpora. Please see the Appendix of the paper for more details.
#### Who are the source language producers?
LegalBench data was created by humans. Demographic information for these individuals is not available.
### Annotations
#### Annotation process
Please see the paper for more information on the annotation process used in the creation of each task.
#### Who are the annotators?
Please see the paper for more information on the identity of annotators for each task.
### Personal and Sensitive Information
Data in this benchmark has either been synthetically generated, or derived from an already public source (e.g., contracts from the EDGAR database).
Several tasks have been derived from the LearnedHands corpus, which consists of public posts on /r/LegalAdvice. Some posts may discuss sensitive issues.
## Considerations for Using the Data
### Social Impact of Dataset
Please see the original paper for a discussion of social impact.
### Discussion of Biases
Please see the original paper for a discussion of biases.
### Other Known Limitations
LegalBench primarily contains tasks corresponding to American law.
## Additional Information
### Dataset Curators
Please see the website for a full list of participants in the LegalBench project.
### Licensing Information
LegalBench tasks are subject to different licenses. Please see the paper for a description of the licenses.
### Citation Information
If you intend to reference LegalBench broadly, please use the citation below. If you are working with a particular task, please use the citation below in addition to the task-specific citation (which can be found on the task page on the website or GitHub).
```
@misc{guha2023legalbench,
title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
year={2023},
eprint={2308.11462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{koreeda2021contractnli,
title={ContractNLI: A dataset for document-level natural language inference for contracts},
author={Koreeda, Yuta and Manning, Christopher D},
journal={arXiv preprint arXiv:2110.01799},
year={2021}
}
@article{hendrycks2021cuad,
title={Cuad: An expert-annotated nlp dataset for legal contract review},
author={Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
@article{wang2023maud,
title={MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding},
author={Wang, Steven H and Scardigli, Antoine and Tang, Leonard and Chen, Wei and Levkin, Dimitry and Chen, Anya and Ball, Spencer and Woodside, Thomas and Zhang, Oliver and Hendrycks, Dan},
journal={arXiv preprint arXiv:2301.00876},
year={2023}
}
@inproceedings{wilson2016creation,
title={The creation and analysis of a website privacy policy corpus},
author={Wilson, Shomir and Schaub, Florian and Dara, Aswarth Abhilash and Liu, Frederick and Cherivirala, Sushain and Leon, Pedro Giovanni and Andersen, Mads Schaarup and Zimmeck, Sebastian and Sathyendra, Kanthashree Mysore and Russell, N Cameron and others},
booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={1330--1340},
year={2016}
}
@inproceedings{zheng2021does,
title={When does pretraining help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal holdings},
author={Zheng, Lucia and Guha, Neel and Anderson, Brandon R and Henderson, Peter and Ho, Daniel E},
booktitle={Proceedings of the eighteenth international conference on artificial intelligence and law},
pages={159--168},
year={2021}
}
@article{zimmeck2019maps,
title={Maps: Scaling privacy compliance analysis to a million apps},
author={Zimmeck, Sebastian and Story, Peter and Smullen, Daniel and Ravichander, Abhilasha and Wang, Ziqi and Reidenberg, Joel R and Russell, N Cameron and Sadeh, Norman},
journal={Proc. Priv. Enhancing Tech.},
volume={2019},
pages={66},
year={2019}
}
@article{ravichander2019question,
title={Question answering for privacy policies: Combining computational and legal perspectives},
author={Ravichander, Abhilasha and Black, Alan W and Wilson, Shomir and Norton, Thomas and Sadeh, Norman},
journal={arXiv preprint arXiv:1911.00841},
year={2019}
}
@article{holzenberger2021factoring,
title={Factoring statutory reasoning as language understanding challenges},
author={Holzenberger, Nils and Van Durme, Benjamin},
journal={arXiv preprint arXiv:2105.07903},
year={2021}
}
@article{lippi2019claudette,
title={CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service},
author={Lippi, Marco and Pa{\l}ka, Przemys{\l}aw and Contissa, Giuseppe and Lagioia, Francesca and Micklitz, Hans-Wolfgang and Sartor, Giovanni and Torroni, Paolo},
journal={Artificial Intelligence and Law},
volume={27},
pages={117--139},
year={2019},
publisher={Springer}
}
``` |
explodinggradients/fiqa | 2023-06-08T16:54:14.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | explodinggradients | FiQA dataset formated in a way that is easier for doing RAG experiments | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 2 | 2,533 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
--- |
alzoubi36/privacy_qa | 2023-06-24T07:54:51.000Z | [
"region:us"
] | alzoubi36 | null | null | null | 0 | 2,527 | ---
dataset_info:
features:
- name: question
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 31955449
num_examples: 157420
- name: validation
num_bytes: 5661628
num_examples: 27780
- name: test
num_bytes: 13381983
num_examples: 62150
download_size: 17138117
dataset_size: 50999060
---
# Dataset for the PrivacyQA task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
|
BeIR/scidocs-qrels | 2022-10-23T06:07:54.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 2,513 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100K<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# Example usage (assumes the `beir` package is installed); shown here for
# scifact — substitute any BEIR-Name from the "Data Splits" table below.
from beir import util
from beir.datasets.data_loader import GenericDataLoader

url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models on each dataset, reported primarily with nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1`
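The qrels format is simple enough to parse with the standard library. The sketch below (illustrative, not part of the official BEIR tooling) reads a qrels `.tsv` into the nested-dict form shown in the next section:

```python
# Illustrative qrels parser (not part of the official BEIR tooling): read a
# tab-separated file with a `query-id  corpus-id  score` header into a
# nested dict of {query-id: {corpus-id: score}}.
import csv
import io

def load_qrels(tsv_text: str) -> dict:
    qrels: dict = {}
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

sample = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"
print(load_qrels(sample))  # {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```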
### Data Instances
A high-level example of the format of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
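With qrels in this dictionary form, simple set-based metrics are easy to compute. The sketch below implements Recall@k for illustration only; the official BEIR evaluation reports nDCG@10 and related metrics through its own tooling.

```python
# With qrels in the nested-dict form above, set-based metrics are simple.
# Illustrative Recall@k; the official BEIR evaluation reports nDCG@10 and
# related metrics through its own tooling.

def recall_at_k(qrels, rankings, k):
    """rankings maps query-id -> list of corpus-ids sorted by decreasing score."""
    per_query = []
    for query_id, relevant in qrels.items():
        retrieved = set(rankings.get(query_id, [])[:k])
        per_query.append(len(retrieved & set(relevant)) / len(relevant))
    return sum(per_query) / len(per_query)

qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
rankings = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}
print(recall_at_k(qrels, rankings, 1))  # 0.5
```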
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
open-llm-leaderboard/details_ashercn97__manatee-7b | 2023-09-17T18:42:54.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 2,478 | ---
pretty_name: Evaluation run of ashercn97/manatee-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ashercn97/manatee-7b](https://huggingface.co/ashercn97/manatee-7b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ashercn97__manatee-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T18:42:42.384089](https://huggingface.co/datasets/open-llm-leaderboard/details_ashercn97__manatee-7b/blob/main/results_2023-09-17T18-42-42.384089.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0030411073825503355,\n\
\ \"em_stderr\": 0.0005638896908753201,\n \"f1\": 0.059899328859060456,\n\
\ \"f1_stderr\": 0.001397556369094792,\n \"acc\": 0.4077875240923591,\n\
\ \"acc_stderr\": 0.009650175391680019\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0030411073825503355,\n \"em_stderr\": 0.0005638896908753201,\n\
\ \"f1\": 0.059899328859060456,\n \"f1_stderr\": 0.001397556369094792\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07050796057619409,\n \
\ \"acc_stderr\": 0.0070515438139836135\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ashercn97/manatee-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|arc:challenge|25_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T18_42_42.384089
path:
- '**/details_harness|drop|3_2023-09-17T18-42-42.384089.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T18-42-42.384089.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T18_42_42.384089
path:
- '**/details_harness|gsm8k|5_2023-09-17T18-42-42.384089.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T18-42-42.384089.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hellaswag|10_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:08:56.879142.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-02T16:08:56.879142.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-02T16:08:56.879142.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T18_42_42.384089
path:
- '**/details_harness|winogrande|5_2023-09-17T18-42-42.384089.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T18-42-42.384089.parquet'
- config_name: results
data_files:
- split: 2023_08_02T16_08_56.879142
path:
- results_2023-08-02T16:08:56.879142.parquet
- split: 2023_09_17T18_42_42.384089
path:
- results_2023-09-17T18-42-42.384089.parquet
- split: latest
path:
- results_2023-09-17T18-42-42.384089.parquet
---
# Dataset Card for Evaluation run of ashercn97/manatee-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ashercn97/manatee-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ashercn97/manatee-7b](https://huggingface.co/ashercn97/manatee-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ashercn97__manatee-7b",
"harness_winogrande_5",
	split="latest")
```
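Each run is stored under a split named after its timestamp (alongside the "latest" alias). As a rough sketch of that naming convention, a hypothetical helper (not part of the leaderboard tooling) could derive the split name from a run timestamp:

```python
def run_timestamp_to_split(timestamp: str) -> str:
    """Hypothetical helper: map a run timestamp to its split name.

    Split names replace '-' in the date and ':' in the time with
    underscores, keeping the 'T' separator and fractional seconds, e.g.
    "2023-09-17T18:42:42.384089" -> "2023_09_17T18_42_42.384089".
    """
    date, time = timestamp.split("T")
    return date.replace("-", "_") + "T" + time.replace(":", "_")
```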
## Latest results
These are the [latest results from run 2023-09-17T18:42:42.384089](https://huggingface.co/datasets/open-llm-leaderboard/details_ashercn97__manatee-7b/blob/main/results_2023-09-17T18-42-42.384089.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0030411073825503355,
"em_stderr": 0.0005638896908753201,
"f1": 0.059899328859060456,
"f1_stderr": 0.001397556369094792,
"acc": 0.4077875240923591,
"acc_stderr": 0.009650175391680019
},
"harness|drop|3": {
"em": 0.0030411073825503355,
"em_stderr": 0.0005638896908753201,
"f1": 0.059899328859060456,
"f1_stderr": 0.001397556369094792
},
"harness|gsm8k|5": {
"acc": 0.07050796057619409,
"acc_stderr": 0.0070515438139836135
},
"harness|winogrande|5": {
"acc": 0.745067087608524,
"acc_stderr": 0.012248806969376422
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
open-llm-leaderboard/details_medalpaca__medalpaca-7b | 2023-08-27T12:32:23.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 2,470 | ---
pretty_name: Evaluation run of medalpaca/medalpaca-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"latest\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_medalpaca__medalpaca-7b\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-07-19T16:30:25.304813](https://huggingface.co/datasets/open-llm-leaderboard/details_medalpaca__medalpaca-7b/blob/main/results_2023-07-19T16%3A30%3A25.304813.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4193625530628919,\n\
\ \"acc_stderr\": 0.03474006835088891,\n \"acc_norm\": 0.42342868811455614,\n\
\ \"acc_norm_stderr\": 0.034724119967275764,\n \"mc1\": 0.25703794369645044,\n\
\ \"mc1_stderr\": 0.015298077509485076,\n \"mc2\": 0.4046224421319521,\n\
\ \"mc2_stderr\": 0.015012572023050848\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.48976109215017066,\n \"acc_stderr\": 0.014608326906285019,\n\
\ \"acc_norm\": 0.5409556313993175,\n \"acc_norm_stderr\": 0.01456229107360123\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6155148376817368,\n\
\ \"acc_stderr\": 0.004854791378656995,\n \"acc_norm\": 0.8042222664807808,\n\
\ \"acc_norm_stderr\": 0.003959872578165267\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680814,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4962962962962963,\n\
\ \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.4962962962962963,\n\
\ \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.2894736842105263,\n \"acc_stderr\": 0.03690677986137282,\n\
\ \"acc_norm\": 0.2894736842105263,\n \"acc_norm_stderr\": 0.03690677986137282\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.42,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4716981132075472,\n \"acc_stderr\": 0.0307235352490061,\n\
\ \"acc_norm\": 0.4716981132075472,\n \"acc_norm_stderr\": 0.0307235352490061\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4513888888888889,\n\
\ \"acc_stderr\": 0.04161402398403279,\n \"acc_norm\": 0.4513888888888889,\n\
\ \"acc_norm_stderr\": 0.04161402398403279\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.21,\n\
\ \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165065,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165065\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.42196531791907516,\n\
\ \"acc_stderr\": 0.0376574669386515,\n \"acc_norm\": 0.42196531791907516,\n\
\ \"acc_norm_stderr\": 0.0376574669386515\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179328,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179328\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4127659574468085,\n \"acc_stderr\": 0.03218471141400352,\n\
\ \"acc_norm\": 0.4127659574468085,\n \"acc_norm_stderr\": 0.03218471141400352\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\
\ \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n\
\ \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.35172413793103446,\n \"acc_stderr\": 0.0397923663749741,\n\
\ \"acc_norm\": 0.35172413793103446,\n \"acc_norm_stderr\": 0.0397923663749741\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.24867724867724866,\n \"acc_stderr\": 0.02226181769240018,\n \"\
acc_norm\": 0.24867724867724866,\n \"acc_norm_stderr\": 0.02226181769240018\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.19047619047619047,\n\
\ \"acc_stderr\": 0.035122074123020514,\n \"acc_norm\": 0.19047619047619047,\n\
\ \"acc_norm_stderr\": 0.035122074123020514\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.5032258064516129,\n \"acc_stderr\": 0.02844341422643833,\n \"\
acc_norm\": 0.5032258064516129,\n \"acc_norm_stderr\": 0.02844341422643833\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3399014778325123,\n \"acc_stderr\": 0.033327690684107895,\n \"\
acc_norm\": 0.3399014778325123,\n \"acc_norm_stderr\": 0.033327690684107895\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\"\
: 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6060606060606061,\n \"acc_stderr\": 0.038154943086889305,\n\
\ \"acc_norm\": 0.6060606060606061,\n \"acc_norm_stderr\": 0.038154943086889305\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.35858585858585856,\n \"acc_stderr\": 0.03416903640391521,\n \"\
acc_norm\": 0.35858585858585856,\n \"acc_norm_stderr\": 0.03416903640391521\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.5129533678756477,\n \"acc_stderr\": 0.036072280610477486,\n\
\ \"acc_norm\": 0.5129533678756477,\n \"acc_norm_stderr\": 0.036072280610477486\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.32564102564102565,\n \"acc_stderr\": 0.02375966576741229,\n\
\ \"acc_norm\": 0.32564102564102565,\n \"acc_norm_stderr\": 0.02375966576741229\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.23333333333333334,\n \"acc_stderr\": 0.025787874220959333,\n \
\ \"acc_norm\": 0.23333333333333334,\n \"acc_norm_stderr\": 0.025787874220959333\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3067226890756303,\n \"acc_stderr\": 0.029953823891887037,\n\
\ \"acc_norm\": 0.3067226890756303,\n \"acc_norm_stderr\": 0.029953823891887037\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.25165562913907286,\n \"acc_stderr\": 0.035433042343899844,\n \"\
acc_norm\": 0.25165562913907286,\n \"acc_norm_stderr\": 0.035433042343899844\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.6275229357798165,\n \"acc_stderr\": 0.020728368457638497,\n \"\
acc_norm\": 0.6275229357798165,\n \"acc_norm_stderr\": 0.020728368457638497\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.21296296296296297,\n \"acc_stderr\": 0.027920963147993662,\n \"\
acc_norm\": 0.21296296296296297,\n \"acc_norm_stderr\": 0.027920963147993662\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6127450980392157,\n \"acc_stderr\": 0.03418931233833344,\n \"\
acc_norm\": 0.6127450980392157,\n \"acc_norm_stderr\": 0.03418931233833344\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.5780590717299579,\n \"acc_stderr\": 0.032148146302403695,\n \
\ \"acc_norm\": 0.5780590717299579,\n \"acc_norm_stderr\": 0.032148146302403695\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5381165919282511,\n\
\ \"acc_stderr\": 0.033460150119732274,\n \"acc_norm\": 0.5381165919282511,\n\
\ \"acc_norm_stderr\": 0.033460150119732274\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5419847328244275,\n \"acc_stderr\": 0.04369802690578756,\n\
\ \"acc_norm\": 0.5419847328244275,\n \"acc_norm_stderr\": 0.04369802690578756\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.5619834710743802,\n \"acc_stderr\": 0.04529146804435792,\n \"\
acc_norm\": 0.5619834710743802,\n \"acc_norm_stderr\": 0.04529146804435792\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.46296296296296297,\n\
\ \"acc_stderr\": 0.04820403072760627,\n \"acc_norm\": 0.46296296296296297,\n\
\ \"acc_norm_stderr\": 0.04820403072760627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.4294478527607362,\n \"acc_stderr\": 0.038890666191127216,\n\
\ \"acc_norm\": 0.4294478527607362,\n \"acc_norm_stderr\": 0.038890666191127216\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3392857142857143,\n\
\ \"acc_stderr\": 0.044939490686135376,\n \"acc_norm\": 0.3392857142857143,\n\
\ \"acc_norm_stderr\": 0.044939490686135376\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.4077669902912621,\n \"acc_stderr\": 0.048657775704107675,\n\
\ \"acc_norm\": 0.4077669902912621,\n \"acc_norm_stderr\": 0.048657775704107675\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6196581196581197,\n\
\ \"acc_stderr\": 0.03180425204384099,\n \"acc_norm\": 0.6196581196581197,\n\
\ \"acc_norm_stderr\": 0.03180425204384099\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5504469987228607,\n\
\ \"acc_stderr\": 0.017788725283507337,\n \"acc_norm\": 0.5504469987228607,\n\
\ \"acc_norm_stderr\": 0.017788725283507337\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.430635838150289,\n \"acc_stderr\": 0.026658800273672373,\n\
\ \"acc_norm\": 0.430635838150289,\n \"acc_norm_stderr\": 0.026658800273672373\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5196078431372549,\n \"acc_stderr\": 0.028607893699576066,\n\
\ \"acc_norm\": 0.5196078431372549,\n \"acc_norm_stderr\": 0.028607893699576066\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4340836012861736,\n\
\ \"acc_stderr\": 0.028150232244535608,\n \"acc_norm\": 0.4340836012861736,\n\
\ \"acc_norm_stderr\": 0.028150232244535608\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.02764847787741332,\n\
\ \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.02764847787741332\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.31560283687943264,\n \"acc_stderr\": 0.027724989449509314,\n \
\ \"acc_norm\": 0.31560283687943264,\n \"acc_norm_stderr\": 0.027724989449509314\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3428943937418514,\n\
\ \"acc_stderr\": 0.012123463271585895,\n \"acc_norm\": 0.3428943937418514,\n\
\ \"acc_norm_stderr\": 0.012123463271585895\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5992647058823529,\n \"acc_stderr\": 0.029768263528933105,\n\
\ \"acc_norm\": 0.5992647058823529,\n \"acc_norm_stderr\": 0.029768263528933105\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5,\n \"acc_stderr\": 0.020227834851568375,\n \"acc_norm\"\
: 0.5,\n \"acc_norm_stderr\": 0.020227834851568375\n },\n \"harness|hendrycksTest-public_relations|5\"\
: {\n \"acc\": 0.509090909090909,\n \"acc_stderr\": 0.04788339768702861,\n\
\ \"acc_norm\": 0.509090909090909,\n \"acc_norm_stderr\": 0.04788339768702861\n\
\ },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.3877551020408163,\n\
\ \"acc_stderr\": 0.031192230726795656,\n \"acc_norm\": 0.3877551020408163,\n\
\ \"acc_norm_stderr\": 0.031192230726795656\n },\n \"harness|hendrycksTest-sociology|5\"\
: {\n \"acc\": 0.4577114427860697,\n \"acc_stderr\": 0.035228658640995975,\n\
\ \"acc_norm\": 0.4577114427860697,\n \"acc_norm_stderr\": 0.035228658640995975\n\
\ },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\":\
\ 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-virology|5\"\
: {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n\
\ \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n\
\ },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.4327485380116959,\n\
\ \"acc_stderr\": 0.03799978644370608,\n \"acc_norm\": 0.4327485380116959,\n\
\ \"acc_norm_stderr\": 0.03799978644370608\n },\n \"harness|truthfulqa:mc|0\"\
: {\n \"mc1\": 0.25703794369645044,\n \"mc1_stderr\": 0.015298077509485076,\n\
\ \"mc2\": 0.4046224421319521,\n \"mc2_stderr\": 0.015012572023050848\n\
\ }\n}\n```"
repo_url: https://huggingface.co/medalpaca/medalpaca-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:30:25.304813.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:30:25.304813.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:30:25.304813.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:30:25.304813.parquet'
- config_name: results
data_files:
- split: 2023_07_19T16_30_25.304813
path:
- results_2023-07-19T16:30:25.304813.parquet
- split: latest
path:
- results_2023-07-19T16:30:25.304813.parquet
---
# Dataset Card for Evaluation run of medalpaca/medalpaca-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/medalpaca/medalpaca-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_medalpaca__medalpaca-7b",
"harness_truthfulqa_mc_0",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-07-19T16:30:25.304813](https://huggingface.co/datasets/open-llm-leaderboard/details_medalpaca__medalpaca-7b/blob/main/results_2023-07-19T16%3A30%3A25.304813.json) (note that there might be results for other tasks in the repo if successive evaluations didn't cover the same tasks; each task's results can be found in its timestamped and "latest" splits):
```python
{
"all": {
"acc": 0.4193625530628919,
"acc_stderr": 0.03474006835088891,
"acc_norm": 0.42342868811455614,
"acc_norm_stderr": 0.034724119967275764,
"mc1": 0.25703794369645044,
"mc1_stderr": 0.015298077509485076,
"mc2": 0.4046224421319521,
"mc2_stderr": 0.015012572023050848
},
"harness|arc:challenge|25": {
"acc": 0.48976109215017066,
"acc_stderr": 0.014608326906285019,
"acc_norm": 0.5409556313993175,
"acc_norm_stderr": 0.01456229107360123
},
"harness|hellaswag|10": {
"acc": 0.6155148376817368,
"acc_stderr": 0.004854791378656995,
"acc_norm": 0.8042222664807808,
"acc_norm_stderr": 0.003959872578165267
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680814,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4962962962962963,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.4962962962962963,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.03690677986137282,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.03690677986137282
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4716981132075472,
"acc_stderr": 0.0307235352490061,
"acc_norm": 0.4716981132075472,
"acc_norm_stderr": 0.0307235352490061
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4513888888888889,
"acc_stderr": 0.04161402398403279,
"acc_norm": 0.4513888888888889,
"acc_norm_stderr": 0.04161402398403279
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.42196531791907516,
"acc_stderr": 0.0376574669386515,
"acc_norm": 0.42196531791907516,
"acc_norm_stderr": 0.0376574669386515
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.04440521906179328,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.04440521906179328
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4127659574468085,
"acc_stderr": 0.03218471141400352,
"acc_norm": 0.4127659574468085,
"acc_norm_stderr": 0.03218471141400352
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.35172413793103446,
"acc_stderr": 0.0397923663749741,
"acc_norm": 0.35172413793103446,
"acc_norm_stderr": 0.0397923663749741
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24867724867724866,
"acc_stderr": 0.02226181769240018,
"acc_norm": 0.24867724867724866,
"acc_norm_stderr": 0.02226181769240018
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.19047619047619047,
"acc_stderr": 0.035122074123020514,
"acc_norm": 0.19047619047619047,
"acc_norm_stderr": 0.035122074123020514
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5032258064516129,
"acc_stderr": 0.02844341422643833,
"acc_norm": 0.5032258064516129,
"acc_norm_stderr": 0.02844341422643833
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3399014778325123,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.3399014778325123,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6060606060606061,
"acc_stderr": 0.038154943086889305,
"acc_norm": 0.6060606060606061,
"acc_norm_stderr": 0.038154943086889305
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.35858585858585856,
"acc_stderr": 0.03416903640391521,
"acc_norm": 0.35858585858585856,
"acc_norm_stderr": 0.03416903640391521
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5129533678756477,
"acc_stderr": 0.036072280610477486,
"acc_norm": 0.5129533678756477,
"acc_norm_stderr": 0.036072280610477486
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.32564102564102565,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.32564102564102565,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.23333333333333334,
"acc_stderr": 0.025787874220959333,
"acc_norm": 0.23333333333333334,
"acc_norm_stderr": 0.025787874220959333
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3067226890756303,
"acc_stderr": 0.029953823891887037,
"acc_norm": 0.3067226890756303,
"acc_norm_stderr": 0.029953823891887037
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.25165562913907286,
"acc_stderr": 0.035433042343899844,
"acc_norm": 0.25165562913907286,
"acc_norm_stderr": 0.035433042343899844
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6275229357798165,
"acc_stderr": 0.020728368457638497,
"acc_norm": 0.6275229357798165,
"acc_norm_stderr": 0.020728368457638497
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.027920963147993662,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.027920963147993662
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6127450980392157,
"acc_stderr": 0.03418931233833344,
"acc_norm": 0.6127450980392157,
"acc_norm_stderr": 0.03418931233833344
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.5780590717299579,
"acc_stderr": 0.032148146302403695,
"acc_norm": 0.5780590717299579,
"acc_norm_stderr": 0.032148146302403695
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5381165919282511,
"acc_stderr": 0.033460150119732274,
"acc_norm": 0.5381165919282511,
"acc_norm_stderr": 0.033460150119732274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5419847328244275,
"acc_stderr": 0.04369802690578756,
"acc_norm": 0.5419847328244275,
"acc_norm_stderr": 0.04369802690578756
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5619834710743802,
"acc_stderr": 0.04529146804435792,
"acc_norm": 0.5619834710743802,
"acc_norm_stderr": 0.04529146804435792
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.04820403072760627,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.04820403072760627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4294478527607362,
"acc_stderr": 0.038890666191127216,
"acc_norm": 0.4294478527607362,
"acc_norm_stderr": 0.038890666191127216
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3392857142857143,
"acc_stderr": 0.044939490686135376,
"acc_norm": 0.3392857142857143,
"acc_norm_stderr": 0.044939490686135376
},
"harness|hendrycksTest-management|5": {
"acc": 0.4077669902912621,
"acc_stderr": 0.048657775704107675,
"acc_norm": 0.4077669902912621,
"acc_norm_stderr": 0.048657775704107675
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6196581196581197,
"acc_stderr": 0.03180425204384099,
"acc_norm": 0.6196581196581197,
"acc_norm_stderr": 0.03180425204384099
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5504469987228607,
"acc_stderr": 0.017788725283507337,
"acc_norm": 0.5504469987228607,
"acc_norm_stderr": 0.017788725283507337
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.430635838150289,
"acc_stderr": 0.026658800273672373,
"acc_norm": 0.430635838150289,
"acc_norm_stderr": 0.026658800273672373
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5196078431372549,
"acc_stderr": 0.028607893699576066,
"acc_norm": 0.5196078431372549,
"acc_norm_stderr": 0.028607893699576066
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4340836012861736,
"acc_stderr": 0.028150232244535608,
"acc_norm": 0.4340836012861736,
"acc_norm_stderr": 0.028150232244535608
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.02764847787741332,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.02764847787741332
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.31560283687943264,
"acc_stderr": 0.027724989449509314,
"acc_norm": 0.31560283687943264,
"acc_norm_stderr": 0.027724989449509314
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3428943937418514,
"acc_stderr": 0.012123463271585895,
"acc_norm": 0.3428943937418514,
"acc_norm_stderr": 0.012123463271585895
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5992647058823529,
"acc_stderr": 0.029768263528933105,
"acc_norm": 0.5992647058823529,
"acc_norm_stderr": 0.029768263528933105
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5,
"acc_stderr": 0.020227834851568375,
"acc_norm": 0.5,
"acc_norm_stderr": 0.020227834851568375
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.509090909090909,
"acc_stderr": 0.04788339768702861,
"acc_norm": 0.509090909090909,
"acc_norm_stderr": 0.04788339768702861
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.3877551020408163,
"acc_stderr": 0.031192230726795656,
"acc_norm": 0.3877551020408163,
"acc_norm_stderr": 0.031192230726795656
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.4577114427860697,
"acc_stderr": 0.035228658640995975,
"acc_norm": 0.4577114427860697,
"acc_norm_stderr": 0.035228658640995975
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.4327485380116959,
"acc_stderr": 0.03799978644370608,
"acc_norm": 0.4327485380116959,
"acc_norm_stderr": 0.03799978644370608
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25703794369645044,
"mc1_stderr": 0.015298077509485076,
"mc2": 0.4046224421319521,
"mc2_stderr": 0.015012572023050848
}
}
```
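As a quick sanity check, the per-task numbers above can be aggregated by hand. The sketch below averages the MMLU (`hendrycksTest`) accuracies over a small illustrative subset of the results dict shown above; the key names follow the JSON, but only three tasks are included, so the average is not the full-run figure.

```python
# Average the per-task MMLU ("hendrycksTest") accuracies from a results
# dict shaped like the JSON above. Only three tasks are included here for
# illustration, so the average differs from the full 57-task "all" figure.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.26},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.4962962962962963},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.2894736842105263},
}

mmlu = {k: v["acc"] for k, v in results.items() if "hendrycksTest" in k}
mmlu_avg = sum(mmlu.values()) / len(mmlu)
print(round(mmlu_avg, 4))  # -> 0.3486
```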
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
togethercomputer/Long-Data-Collections | 2023-07-26T17:03:50.000Z | [
"license:other",
"region:us"
] | togethercomputer | null | null | null | 43 | 2,468 | ---
license: other
---
# Dataset Summary
This collection is a compilation of long-context datasets, specifically designed for tasks requiring extensive comprehension and inference over long text inputs.
Currently, it encompasses data intended for training a robust base model, which can be found in the `pretrain/` directory. It also includes datasets tailored for specific tasks, located in the `fine-tune/` directory: multi-passage question answering, derived from Natural Questions, and long-context summarization, exemplified by the BookSum dataset.
# Detailed Description
## Pretrain Data
The pretraining data is a collection of diverse datasets used to train the base model. These datasets draw on a wide range of sources, from books to scientific papers to instruction data. Here's a detailed look at each:
### RedPajama-Book
This dataset is a specific slice of the larger RedPajama-Data-1T. The RedPajama-Book subset focuses on data extracted from books. This broad and diverse range of literary content helps the model understand and generate text across a wide variety of styles, genres, and topics, and in particular across long contexts.
### RedPajama-ArXiv
The RedPajama-ArXiv dataset is another specific slice of RedPajama-Data-1T. In this dataset, the abstract of each paper is appended after the paper's body, providing a summary of its content; predicting the trailing abstract encourages the model to leverage long-range context.
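To illustrate the formatting decision above — placing each paper's abstract after its body — here is a minimal sketch. The field names `text` and `abstract` are assumptions for illustration, not the slice's actual schema.

```python
# Sketch of the "abstract after the paper" layout described above.
# The record fields below are hypothetical; the real RedPajama-ArXiv
# slice may use different field names.
def format_arxiv_record(record: dict) -> str:
    """Append the abstract after the full paper text, so a model must
    read the long body before predicting the closing summary."""
    return record["text"] + "\n\nAbstract: " + record["abstract"]

doc = format_arxiv_record(
    {"text": "1. Introduction ...", "abstract": "We study ..."}
)
print(doc.endswith("Abstract: We study ..."))  # -> True
```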
### UL2 Oscar
This dataset is generated from OSCAR with LAION-AI's Open-Instruction-Generalist, with prompts asking the model to fill in missing chunks or to complete the text.
### RedPajama
This is a subset of the RedPajama-Data-1T. The RedPajama dataset is a large and diverse dataset that includes a wide variety of data sources. The specific subset used in this case (togethercomputer/RedPajama-Data-1T-Sample) is a representative sample of the larger dataset, providing a broad overview of the types of data included in RedPajama-Data-1T.
### NI
The Materialized Natural Instruction (NI) data is a dataset that focuses on natural language instructions. This dataset has been decontaminated against HELM core scenarios, meaning any data that matches specific scenarios outlined in the HELM core has been removed to avoid bias or overfitting. This dataset aids the model in understanding and generating instructional text.
### P3
The Materialized Public Pool of Prompts (P3) data is a dataset that includes a wide variety of user-generated prompts. This dataset has also been decontaminated against HELM core scenarios. The P3 dataset helps the model in understanding a broad set of user prompts and generating appropriate responses.
### Pile
The Pile dataset is a large and diverse dataset that includes a wide variety of data sources. The specific subset used in this case is a subsample of the larger Pile dataset.
## Fine-tune Data
### Multi-passage QA from Natural Questions:
This dataset is a multi-passage question answering dataset derived from the original Natural Questions (NQ) dataset by Google. The NQ dataset consists of real user queries issued to Google's search engine, paired with high-quality answers. In this derived version, each example consists of a question along with multiple (10-200) Wiki passages, from which the model must infer the correct answer. This dataset is designed to challenge and evaluate models on their ability to handle complex, multi-passage question answering.
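As a rough illustration of how such a multi-passage example could be turned into a single long-context prompt, here is a minimal sketch. The field names (`question`, `passages`) are illustrative assumptions, not the dataset's documented schema.

```python
# Sketch: building one long-context prompt from a multi-passage QA example.
# Field names ("question", "passages") are illustrative assumptions.

def build_prompt(question, passages, max_passages=None):
    """Concatenate retrieved passages and the question into one prompt string."""
    if max_passages is not None:
        passages = passages[:max_passages]
    context = "\n\n".join(
        f"Passage {i + 1}: {p}" for i, p in enumerate(passages)
    )
    return f"{context}\n\nQuestion: {question}\nAnswer:"

example = {
    "question": "Who wrote Hamlet?",
    "passages": [
        "Hamlet is a tragedy by William Shakespeare.",
        "It was written around 1600.",
    ],
}
prompt = build_prompt(example["question"], example["passages"])
```

With 10-200 passages per example, truncating via `max_passages` (or more careful passage selection) is often needed to fit a model's context window.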
### BookSum:
BookSum is a dataset for long context summarization. It includes a vast collection of books from various genres, and the task is to generate a coherent and concise summary given a long context from the book. This dataset is designed to test and train models on their ability to understand and summarize long, complex narratives.
# Dataset Limitations and Future Work
While these datasets provide a robust platform for training and evaluating models on long context tasks, they may still contain some limitations. For instance, the datasets might be biased towards the types of questions asked in Google's search engine and the genres of books included in the BookSum dataset. In the future, we plan to expand this collection to include more diverse datasets for a wider range of long context tasks.
# Licensing Information
Please refer to the original sources of the datasets for information on their respective licenses. |
LabHC/bias_in_bios | 2023-09-10T15:41:38.000Z | [
"task_categories:text-classification",
"language:en",
"license:mit",
"region:us"
] | LabHC | null | null | null | 0 | 2,466 | ---
license: mit
task_categories:
- text-classification
language:
- en
dataset_info:
features:
- name: hard_text
dtype: string
- name: profession
dtype: int64
- name: gender
dtype: int64
splits:
- name: train
num_bytes: 107487885
num_examples: 257478
- name: test
num_bytes: 41312256
num_examples: 99069
- name: dev
num_bytes: 16504417
num_examples: 39642
download_size: 99808338
dataset_size: 165304558
---
# Bias in Bios
Bias in Bios was created by De-Arteaga et al. (2019) and published under the MIT license (https://github.com/microsoft/biosbias). The dataset is used to investigate bias in NLP models. It consists of textual biographies used to predict professional occupations; the sensitive attribute is gender (binary).
The version shared here is the one proposed by Ravfogel et al. (2020), which is slightly smaller due to the unavailability of 5,557 biographies.
The dataset is divided into train (257,478 samples), test (99,069 samples), and dev (39,642 samples) sets.
To load all splits ('train', 'dev', 'test'), use the following code:
```python
from datasets import load_dataset

train_dataset = load_dataset("LabHC/bias_in_bios", split='train')
test_dataset = load_dataset("LabHC/bias_in_bios", split='test')
dev_dataset = load_dataset("LabHC/bias_in_bios", split='dev')
```
Below are the classification and sensitive attribute labels and their proportions. Distributions are similar across the three sets.
#### Classification labels
| Profession | Numerical label | Proportion (%) | | Profession | Numerical label | Proportion (%) |
|---|---|---|---|---|---|---|
| accountant | 0 | 1.42 | | painter | 14 | 1.95 |
| architect | 1 | 2.55 | | paralegal | 15 | 0.45 |
| attorney | 2 | 8.22 | | pastor | 16 | 0.64 |
| chiropractor | 3 | 0.67 | | personal_trainer | 17 | 0.36 |
| comedian | 4 | 0.71 | | photographer | 18 | 6.13 |
| composer | 5 | 1.41 | | physician | 19 | 10.35 |
| dentist | 6 | 3.68 | | poet | 20 | 1.77 |
| dietitian | 7 | 1.0 | | professor | 21 | 29.8 |
| dj | 8 | 0.38 | | psychologist | 22 | 4.64 |
| filmmaker | 9 | 1.77 | | rapper | 23 | 0.35 |
| interior_designer | 10 | 0.37 | | software_engineer | 24 | 1.74 |
| journalist | 11 | 5.03 | | surgeon | 25 | 3.43 |
| model | 12 | 1.89 | | teacher | 26 | 4.09 |
| nurse | 13 | 4.78 | | yoga_teacher | 27 | 0.42 |
#### Sensitive attributes
| Gender | Numerical label | Proportion (%) |
|---|---|---|
| Male | 0 | 53.9 |
| Female | 1 | 46.1 |
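For convenience when inspecting examples, the integer labels can be decoded back into names. The following is a minimal sketch whose orderings mirror the label tables in this card; the field names (`hard_text`, `profession`, `gender`) are the ones documented in the dataset metadata.

```python
# Sketch: decoding the integer labels into human-readable names.
# The orderings mirror the profession and gender tables in this card.

PROFESSIONS = [
    "accountant", "architect", "attorney", "chiropractor", "comedian",
    "composer", "dentist", "dietitian", "dj", "filmmaker",
    "interior_designer", "journalist", "model", "nurse", "painter",
    "paralegal", "pastor", "personal_trainer", "photographer", "physician",
    "poet", "professor", "psychologist", "rapper", "software_engineer",
    "surgeon", "teacher", "yoga_teacher",
]
GENDERS = ["Male", "Female"]

def decode(example):
    """Replace the integer labels of one example with their string names."""
    return {
        "hard_text": example["hard_text"],
        "profession": PROFESSIONS[example["profession"]],
        "gender": GENDERS[example["gender"]],
    }

row = {"hard_text": "She is a registered nurse ...", "profession": 13, "gender": 1}
decoded = decode(row)  # profession: 'nurse', gender: 'Female'
```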
---
(De-Arteaga et al., 2019) Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 120–128. https://doi.org/10.1145/3287560.3287572
(Ravfogel et al., 2020) Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics.
go_emotions | 2023-06-01T14:59:54.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"emotion",
"arxiv:2005.00547",
"region:us"
] | null | The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.
The emotion categories are admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire,
disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness,
optimism, pride, realization, relief, remorse, sadness, surprise. | @inproceedings{demszky2020goemotions,
author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
year = {2020}
} | null | 57 | 2,465 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: goemotions
pretty_name: GoEmotions
tags:
- emotion
dataset_info:
- config_name: raw
features:
- name: text
dtype: string
- name: id
dtype: string
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: float32
- name: rater_id
dtype: int32
- name: example_very_unclear
dtype: bool
- name: admiration
dtype: int32
- name: amusement
dtype: int32
- name: anger
dtype: int32
- name: annoyance
dtype: int32
- name: approval
dtype: int32
- name: caring
dtype: int32
- name: confusion
dtype: int32
- name: curiosity
dtype: int32
- name: desire
dtype: int32
- name: disappointment
dtype: int32
- name: disapproval
dtype: int32
- name: disgust
dtype: int32
- name: embarrassment
dtype: int32
- name: excitement
dtype: int32
- name: fear
dtype: int32
- name: gratitude
dtype: int32
- name: grief
dtype: int32
- name: joy
dtype: int32
- name: love
dtype: int32
- name: nervousness
dtype: int32
- name: optimism
dtype: int32
- name: pride
dtype: int32
- name: realization
dtype: int32
- name: relief
dtype: int32
- name: remorse
dtype: int32
- name: sadness
dtype: int32
- name: surprise
dtype: int32
- name: neutral
dtype: int32
splits:
- name: train
num_bytes: 55343630
num_examples: 211225
download_size: 42742918
dataset_size: 55343630
- config_name: simplified
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': admiration
'1': amusement
'2': anger
'3': annoyance
'4': approval
'5': caring
'6': confusion
'7': curiosity
'8': desire
'9': disappointment
'10': disapproval
'11': disgust
'12': embarrassment
'13': excitement
'14': fear
'15': gratitude
'16': grief
'17': joy
'18': love
'19': nervousness
'20': optimism
'21': pride
'22': realization
'23': relief
'24': remorse
'25': sadness
'26': surprise
'27': neutral
- name: id
dtype: string
splits:
- name: train
num_bytes: 4224198
num_examples: 43410
- name: validation
num_bytes: 527131
num_examples: 5426
- name: test
num_bytes: 524455
num_examples: 5427
download_size: 4394818
dataset_size: 5275784
config_names:
- raw
- simplified
---
# Dataset Card for GoEmotions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research/google-research/tree/master/goemotions
- **Repository:** https://github.com/google-research/google-research/tree/master/goemotions
- **Paper:** https://arxiv.org/abs/2005.00547
- **Leaderboard:**
- **Point of Contact:** [Dora Demszky](https://nlp.stanford.edu/~ddemszky/index.html)
### Dataset Summary
The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.
The raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test
splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class, multi-label emotion classification.
### Languages
The data is in English.
## Dataset Structure
### Data Instances
Each instance is a reddit comment with a corresponding ID and one or more emotion annotations (or neutral).
### Data Fields
The simplified configuration includes:
- `text`: the reddit comment
- `labels`: the emotion annotations
- `id`: unique identifier of the comment (can be used to look up the entry in the raw dataset)
In addition to the above, the raw data includes:
* `author`: The Reddit username of the comment's author.
* `subreddit`: The subreddit that the comment belongs to.
* `link_id`: The link id of the comment.
* `parent_id`: The parent id of the comment.
* `created_utc`: The timestamp of the comment.
* `rater_id`: The unique id of the annotator.
* `example_very_unclear`: Whether the annotator marked the example as being very unclear or difficult to label (in this
case they did not choose any emotion labels).
In the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the
simplified data.
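To make the relationship between the two representations concrete, here is a small illustrative sketch (not part of the official tooling) that converts between raw-style binary columns and simplified-style label id lists. The label order matches the `simplified` config's class names listed above.

```python
# Sketch: converting between the raw representation (one binary column per
# emotion) and the simplified representation (a list of label ids).
# The order matches the `simplified` config's class_label names.

EMOTIONS = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise", "neutral",
]

def binary_columns_to_ids(raw_row):
    """Collect the ids of emotions whose binary column is 1."""
    return [i for i, name in enumerate(EMOTIONS) if raw_row.get(name, 0) == 1]

def ids_to_binary_columns(label_ids):
    """Expand a list of label ids into a {emotion_name: 0/1} dict."""
    return {name: int(i in label_ids) for i, name in enumerate(EMOTIONS)}

raw_row = {"text": "Thanks, that made my day!", "gratitude": 1, "joy": 1}
ids = binary_columns_to_ids(raw_row)  # [15, 17] -> gratitude, joy
```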
### Data Splits
The simplified data includes a set of train/val/test splits with 43,410, 5426, and 5427 examples respectively.
## Dataset Creation
### Curation Rationale
From the paper abstract:
> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
fine-grained typology, adaptable to multiple downstream tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper.
#### Who are the source language producers?
English-speaking Reddit users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Annotations were produced by 3 English-speaking crowdworkers in India.
### Personal and Sensitive Information
This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
are typically disassociated from personal real-world identities, this is not always the case. It may therefore be
possible to discover the identities of the individuals who created this content in some cases.
## Considerations for Using the Data
### Social Impact of Dataset
Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness (see
[this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).
### Discussion of Biases
From the authors' github page:
> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).
### Licensing Information
The GitHub repository which houses this dataset has an
[Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).
### Citation Information
```
@inproceedings{demszky2020goemotions,
  author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
  booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
  title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
  year = {2020}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
EleutherAI/arithmetic | 2023-03-09T17:58:16.000Z | [
"arxiv:2005.14165",
"region:us"
] | EleutherAI | A small battery of 10 tests that involve asking language models a simple arithmetic
problem in natural language. | @inproceedings{NEURIPS2020_1457c0d6,
author = {Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel and Wu, Jeffrey and Winter, Clemens and Hesse, Chris and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {1877--1901},
publisher = {Curran Associates, Inc.},
title = {Language Models are Few-Shot Learners},
url = {https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf},
volume = {33},
year = {2020}
} | null | 1 | 2,440 | ### Dataset Summary
A small battery of 10 tests that involve asking language models a simple arithmetic problem in natural language.
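To give a feel for the task format, here is an illustrative sketch (not the dataset's exact prompt template) that generates GPT-3-style natural-language arithmetic questions together with their gold answers.

```python
# Illustrative sketch, not the dataset's exact format: generating
# natural-language arithmetic prompts and their answers.

import random

OPS = {
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
    "times": lambda a, b: a * b,
}

def make_example(digits=2, op="plus", rng=random):
    """Build one (prompt, answer) pair with `digits`-digit operands."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    prompt = f"Question: What is {a} {op} {b}?\nAnswer:"
    return prompt, str(OPS[op](a, b))

rng = random.Random(0)
prompt, answer = make_example(digits=2, op="plus", rng=rng)
```

A model is then scored on whether its completion of the prompt matches the gold answer string.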
### Languages
English
### Source Data
Obtained from [https://github.com/openai/gpt-3/tree/master/data](https://github.com/openai/gpt-3/tree/master/data)
### Citation
```
@article{brown2020language,
title={Language Models are Few-Shot Learners},
author={Tom B. Brown and Benjamin Mann and Nick Ryder and Melanie Subbiah and Jared Kaplan and Prafulla Dhariwal and Arvind Neelakantan and Pranav Shyam and Girish Sastry and Amanda Askell and Sandhini Agarwal and Ariel Herbert-Voss and Gretchen Krueger and Tom Henighan and Rewon Child and Aditya Ramesh and Daniel M. Ziegler and Jeffrey Wu and Clemens Winter and Christopher Hesse and Mark Chen and Eric Sigler and Mateusz Litwin and Scott Gray and Benjamin Chess and Jack Clark and Christopher Berner and Sam McCandlish and Alec Radford and Ilya Sutskever and Dario Amodei},
year={2020},
eprint={2005.14165},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
danbider/codegen | 2023-07-21T01:53:30.000Z | [
"region:us"
] | danbider | null | null | null | 0 | 2,420 | Entry not found |
dell-research-harvard/AmericanStories | 2023-09-08T18:33:32.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_categories:question-answering",
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-4.0",
"social science",
"economics",
"news",
"newspaper",
"large language modeling",
"nlp",
"lam",
"doi:10.57967/hf/0757",
"region:us"
] | dell-research-harvard | American Stories offers high-quality structured data from historical newspapers suitable for pre-training large language models to enhance the understanding of historical English and world knowledge. It can also be integrated into external databases of retrieval-augmented language models, enabling broader access to historical information, including interpretations of political events and intricate details about people's ancestors. Additionally, the structured article texts facilitate the application of transformer-based methods for popular tasks like detecting reproduced content, significantly improving accuracy compared to traditional OCR methods. American Stories serves as a substantial and valuable dataset for advancing multimodal layout analysis models and other multimodal applications. | Coming Soon | null | 76 | 2,415 | ---
license: cc-by-4.0
task_categories:
- text-classification
- text-generation
- text-retrieval
- summarization
- question-answering
language:
- en
tags:
- social science
- economics
- news
- newspaper
- large language modeling
- nlp
- lam
pretty_name: AmericanStories
size_categories:
- 100M<n<1B
---
# Dataset Card for the American Stories dataset
## Dataset Description
- **Homepage:** Coming Soon
- **Repository:** https://github.com/dell-research-harvard/AmericanStories
- **Paper:** Coming Soon
- **Point of Contact:** melissa.dell@gmail.com
### Dataset Summary
The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets.
It was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and the association of article texts spanning multiple bounding boxes. It employs efficient architectures specifically designed for mobile phones to ensure high scalability.
The dataset offers high-quality data that can be utilized for various purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge.
The dataset can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.
Additionally, the structured article texts in the dataset enable the use of transformer-based methods for applications such as detecting reproduced content. This significantly enhances accuracy compared to relying solely on existing OCR techniques.
The American Stories dataset serves as an invaluable resource for developing multimodal layout analysis models and other multimodal applications. Its vast size and silver quality make it ideal for innovation and research in this domain.
### Languages
English (en)
## Dataset Structure
The raw data on this repo contains compressed chunks of newspaper scans for each year. Each scan has its own JSON file named {scan_id}.json.
The data loading script takes care of downloading, extraction, and parsing into two kinds of output:
+ Article-Level Output: The unit of the Dataset Dict is an associated article
+ Scan Level Output: The unit of the Dataset Dict is an entire scan with all the raw unparsed data
### Data Instances
Here are some examples of what the output looks like.
#### Article level
```
{
'article_id': '1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773',
'newspaper_name': 'The weekly Arizona miner.',
'edition': '01', 'date': '1870-01-01',
'page': 'p1',
'headline': '',
'byline': '',
'article': 'PREyors 10 leaving San Francisco for Wash ington City, our Governor, A. r. K. Saford. called upon Generals Thomas and Ord and nt the carrying out of what (truncated)'
}
```
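Once loaded, article-level records like the one above are plain dictionaries and are easy to post-process. The following is a minimal sketch (using the documented fields `newspaper_name`, `date`, and `article`) that groups articles by newspaper and parses publication years.

```python
# Sketch: simple post-processing of article-level records,
# grouping articles by newspaper and extracting publication years.

from collections import defaultdict

def group_by_newspaper(articles):
    """Map newspaper_name -> list of article texts."""
    groups = defaultdict(list)
    for art in articles:
        groups[art["newspaper_name"]].append(art["article"])
    return dict(groups)

def year_of(article):
    """Publication year, parsed from the YYYY-MM-DD date field."""
    return int(article["date"].split("-")[0])

articles = [
    {"newspaper_name": "The weekly Arizona miner.", "date": "1870-01-01",
     "article": "PREyors 10 leaving San Francisco ..."},
    {"newspaper_name": "The weekly Arizona miner.", "date": "1870-01-08",
     "article": "..."},
]
groups = group_by_newspaper(articles)
```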
#### Scan level
```
{'raw_data_string': '{"lccn": {"title": "The Massachusetts spy, or, Thomas\'s Boston journal.", "geonames_ids": ["4930956"],....other_keys:values}
```
### Data Fields
#### Article Level
+ "article_id": Unique Id for an associated article
+ "newspaper_name": Newspaper Name
+ "edition": Edition number
+ "date": Date of publication
+ "page": Page number
+ "headline": Headline Text
+ "byline": Byline Text
+ "article": Article Text
#### Scan Level
"raw_data_string": Unparsed scan-level data that contains scan metadata from Library of Congress, all content regions with their bounding boxes, OCR text and legibility classification
### Data Splits
There are no train, test or val splits. Since the dataset has a massive number of units (articles or newspaper scans), we have split the data by year. Once the dataset is loaded,
instead of the usual way of accessing a split as dataset["train"], specific years can be accessed using the syntax dataset["year"], where year can be any year between 1774 and 1963, as long as there is at least one scan for that year.
The data loading script provides options to download both a subset of years and all years at a time.
### Accessing the Data
There are 4 config options that can be used to access the data, depending on the use case.
```
from datasets import load_dataset
# Download data for the year 1809 at the associated article level (Default)
dataset = load_dataset("dell-research-harvard/AmericanStories",
"subset_years",
year_list=["1809", "1810"]
)
# Download and process data for all years at the article level
dataset = load_dataset("dell-research-harvard/AmericanStories",
"all_years"
)
# Download and process data for 1809 at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
"subset_years_content_regions",
year_list=["1809"]
)
# Download and process data for all years at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
"all_years_content_regions")
```
## Dataset Creation
### Curation Rationale
The dataset was created to provide researchers with a large, high-quality corpus of structured and transcribed newspaper article texts from historical local American newspapers.
These texts provide a massive repository of information about topics ranging from political polarization to the construction of national and cultural identities to the minutiae of the daily lives of people's ancestors.
The dataset will be useful to a wide variety of researchers including historians, other social scientists, and NLP practitioners.
### Source Data
#### Initial Data Collection and Normalization
The dataset is drawn entirely from image scans in the public domain that are freely available for download from the Library of Congress's website.
We processed all images as described in the associated paper.
#### Who are the source language producers?
The source language was produced by people - by newspaper editors, columnists, and other sources.
### Annotations
#### Annotation process
Not Applicable
#### Who are the annotators?
Not Applicable
### Personal and Sensitive Information
Not Applicable
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides high-quality data that could be used for pre-training a large language model to achieve better understanding of historical English and historical world knowledge.
The dataset could also be added to the external database of a retrieval-augmented language model to make historical information - ranging from interpretations of political events to minutiae about the lives of people's ancestors - more widely accessible.
Furthermore, structured article texts that it provides can facilitate using transformer-based methods for popular applications like detection of reproduced content, significantly improving accuracy relative to using the existing OCR.
It can also be used for innovating multimodal layout analysis models and other multimodal applications.
### Discussion of Biases
This dataset contains unfiltered content composed by newspaper editors, columnists, and other sources.
In addition to other potentially harmful content, the corpus may contain factual errors and intentional misrepresentations of news events.
All content should be viewed as individuals' opinions and not as a purely factual account of events of the day.
## Additional Information
### Dataset Curators
Melissa Dell (Harvard), Jacob Carlson (Harvard), Tom Bryan (Harvard), Emily Silcock (Harvard), Abhishek Arora (Harvard), Zejiang Shen (MIT), Luca D'Amico-Wong (Harvard), Quan Le (Princeton), Pablo Querubin (NYU), Leander Heldring (Kellogg School of Management)
### Licensing Information
The dataset has a CC-BY 4.0 license
### Citation Information
Coming Soon
### Contributions
Coming Soon |
McGill-NLP/FaithDial | 2023-02-05T04:09:45.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"language:en",
"license:mit",
"faithful-dialogue-modeling",
"trustworthy-dialogue-modeling",
"arxiv:2204.10757",
"region:us"
] | McGill-NLP | FaithDial is a new benchmark for hallucination-free dialogues, created by manually editing hallucinated and uncooperative responses in Wizard of Wikipedia. | @article{dziri2022faithdial,
title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue},
author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva},
journal={arXiv preprint, arXiv:2204.10757},
year={2022},
url={https://arxiv.org/abs/2204.10757}
} | null | 10 | 2,390 | ---
annotations_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100k
task_categories:
- conversational
- text-generation
task_ids:
- dialogue-modeling
pretty_name: A Faithful Benchmark for Information-Seeking Dialogue
tags:
- faithful-dialogue-modeling
- trustworthy-dialogue-modeling
---
## Dataset Summary
FaithDial is a faithful knowledge-grounded dialogue benchmark, composed of **50,761** turns spanning **5649** conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) (WoW). In our dialogue setting, we simulate interactions between two speakers: **an information seeker** and **a bot wizard**. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules:
- **First**, it should be truthful by providing information that is attributable to the source knowledge *K*.
- **Second**, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions.
- **Third**, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*.
## Dataset Description
- **Homepage:** [FaithDial](https://mcgill-nlp.github.io/FaithDial/)
- **Repository:** [GitHub](https://github.com/McGill-NLP/FaithDial)
- **Point of Contact:** [Nouha Dziri](mailto:dziri@ualberta.ca)
## Language
English
## Data Instance
An example of 'train' looks as follows:
```text
[
{
"utterances": [
... // prior utterances,
{
"history": [
"Have you ever been to a concert? They're so fun!",
"No I cannot as a bot. However, have you been to Madonna's? Her 10th concert was used to help her 13th album called \"Rebel Heart\".",
"Yeah I've heard of it but never went or what it was for. Can you tell me more about it?"
],
"speaker": "Wizard",
"knowledge": "It began on September 9, 2015, in Montreal, Canada, at the Bell Centre and concluded on March 20, 2016, in Sydney, Australia at Allphones Arena.",
"original_response": "It started in September of 2015 and ran all the way through March of 2016. Can you imagine being on the road that long?",
"response": "Sure. The concert started in September 9th of 2015 at Montreal, Canada. It continued till 20th of March of 2016, where it ended at Sydney, Australia.",
"BEGIN": [
"Hallucination",
"Entailment"
],
"VRM": [
"Disclosure",
"Question"
]
},
... // more utterances
]
},
... // more dialogues
]
```
If the `original_response` is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos.
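Since an empty `original_response` marks a turn that was already faithful, filtering the data on that field is straightforward. Here is a minimal sketch operating on utterance dictionaries shaped like the example above.

```python
# Sketch: separating turns whose WoW response was already faithful
# (empty `original_response`) from turns that annotators edited.

def split_turns(utterances):
    """Return (already_faithful, edited) wizard turns."""
    faithful = [u for u in utterances if not u.get("original_response")]
    edited = [u for u in utterances if u.get("original_response")]
    return faithful, edited

turns = [
    {"response": "Sure. The concert started ...",
     "original_response": "It started in September of 2015 ...",
     "BEGIN": ["Hallucination", "Entailment"]},
    {"response": "Madonna is an American singer.",
     "original_response": "",
     "BEGIN": ["Entailment"]},
]
faithful, edited = split_turns(turns)
```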
## Data Fields
- `history`: `List[string]`. The dialogue history.
- `knowledge`: `string`. The source knowkedge on which the bot wizard should ground its response.
- `speaker`: `string`. The current speaker.
- `original response`: `string`. The WoW original response before editing it.
- `response`: `string`. The new Wizard response.
- `BEGIN`: `List[string]`. The BEGIN labels for the Wizard response.
- `VRM`: `List[string]`. The VRM labels for the wizard response.
## Data Splits
- `Train`: 36809 turns
- `Valid`: 6851 turns
- `Test`: 7101 turns
`Valid` includes both the `seen` and the `unseen` data splits from WoW. The same applies to `Test`. We also include those splits for FaithDial valid and test data.
## Annotations
Following the guidelines for ethical crowdsourcing outlined in [Sheehan. 2018](https://www.tandfonline.com/doi/abs/10.1080/03637751.2017.1342043),
we hired Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples of our terminology. Moreover, we ran several staging rounds over the course of several months.
### Who are the annotators?
To be eligible for the task, workers had to be located in the United States or Canada and to successfully answer 20 questions as part of a qualification test. Before launching the main annotation task, we performed a small pilot round (60 HITs) to check the performance of the workers. We emailed workers who made errors, providing them with examples of how to fix their mistakes in future HITs.
## Personal and Sensitive Information
Seeker utterances in FaithDial may contain personal and sensitive information.
## Social Impact of Dataset
In recent years, the conversational AI market has seen a proliferation of applications powered by large pre-trained LMs, spanning a broad range of domains such as customer support, education, e-commerce, health, and entertainment. Ensuring that these systems are trustworthy is key to deploying them safely at large scale in real-world applications, especially in high-stakes domains. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and make virtual assistants both safer and more reliable.
## Licensing Information
MIT
## Citation Information
```bibtex
@article{dziri2022faithdial,
title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue},
author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva},
journal={arXiv preprint, arXiv:2204.10757},
year={2022},
url={https://arxiv.org/abs/2204.10757}
}
```
|
HuggingFaceM4/general-pmd-synthetic-testing-with-embeddings | 2023-04-20T13:40:41.000Z | [
"license:bigscience-openrail-m",
"region:us"
] | HuggingFaceM4 | This dataset is designed to be used in testing. It's derived from general-pmd-10k dataset | @InProceedings{huggingface:dataset,
title = {Multimodal synthetic dataset for testing / general PMD},
author={HuggingFace, Inc.},
year={2022}
} | null | 0 | 2,387 | ---
license: bigscience-openrail-m
---
This dataset is designed to be used in testing. It's derived from general-pmd/localized_narratives__ADE20k dataset
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across `text` entries.
The `repeat` ones repeat the same 10 unique records. These are useful for debugging memory leaks, since the records are always the same and thus remove record variation from the equation.
The default split is `100.unique`
The full process of this dataset creation, including which records were used to build it, is documented inside [general-pmd-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/general-pmd-synthetic-testing/blob/main/general-pmd-synthetic-testing.py)
|
Dahoas/synthetic-instruct-gptj-pairwise | 2023-01-09T03:48:03.000Z | [
"region:us"
] | Dahoas | null | null | null | 41 | 2,372 | Entry not found |
armanc/pubmed-rct20k | 2022-11-11T08:23:24.000Z | [
"region:us"
] | armanc | null | null | null | 0 | 2,364 | The small 20K version of the Pubmed-RCT dataset by Dernoncourt et al (2017).
```
@article{dernoncourt2017pubmed,
title={Pubmed 200k rct: a dataset for sequential sentence classification in medical abstracts},
author={Dernoncourt, Franck and Lee, Ji Young},
journal={arXiv preprint arXiv:1710.06071},
year={2017}
}
```
Note: This is the cleaned-up version by Jin and Szolovits (2018).
```
@article{jin2018hierarchical,
title={Hierarchical neural networks for sequential sentence classification in medical scientific abstracts},
author={Jin, Di and Szolovits, Peter},
journal={arXiv preprint arXiv:1808.06161},
year={2018}
}
``` |
tner/ontonotes5 | 2022-07-18T00:43:55.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | tner | [ontonotes5 NER dataset](https://aclanthology.org/N06-2015/) | @inproceedings{hovy-etal-2006-ontonotes,
title = "{O}nto{N}otes: The 90{\%} Solution",
author = "Hovy, Eduard and
Marcus, Mitchell and
Palmer, Martha and
Ramshaw, Lance and
Weischedel, Ralph",
booktitle = "Proceedings of the Human Language Technology Conference of the {NAACL}, Companion Volume: Short Papers",
month = jun,
year = "2006",
address = "New York City, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N06-2015",
pages = "57--60",
} | null | 3 | 2,359 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Ontonotes5
---
# Dataset Card for "tner/ontonotes5"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/N06-2015/](https://aclanthology.org/N06-2015/)
- **Dataset:** Ontonotes5
- **Domain:** News
- **Number of Entity:** 8
### Dataset Summary
Ontonotes5 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `CARDINAL`, `DATE`, `PERSON`, `NORP`, `GPE`, `LAW`, `PERCENT`, `ORDINAL`, `MONEY`, `WORK_OF_ART`, `FAC`, `TIME`, `QUANTITY`, `PRODUCT`, `LANGUAGE`, `ORG`, `LOC`, `EVENT`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 5, 0, 0, 0, 0, 11, 12, 12, 12, 12, 0, 0, 7, 0, 0, 0, 0, 0],
'tokens': ['``', 'It', "'s", 'very', 'costly', 'and', 'time', '-', 'consuming', ',', "''", 'says', 'Phil', 'Rosen', ',', 'a', 'partner', 'in', 'Fleet', '&', 'Leasing', 'Management', 'Inc.', ',', 'a', 'Boston', 'car', '-', 'leasing', 'company', '.']
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/ontonotes5/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-CARDINAL": 1,
"B-DATE": 2,
"I-DATE": 3,
"B-PERSON": 4,
"I-PERSON": 5,
"B-NORP": 6,
"B-GPE": 7,
"I-GPE": 8,
"B-LAW": 9,
"I-LAW": 10,
"B-ORG": 11,
"I-ORG": 12,
"B-PERCENT": 13,
"I-PERCENT": 14,
"B-ORDINAL": 15,
"B-MONEY": 16,
"I-MONEY": 17,
"B-WORK_OF_ART": 18,
"I-WORK_OF_ART": 19,
"B-FAC": 20,
"B-TIME": 21,
"I-CARDINAL": 22,
"B-LOC": 23,
"B-QUANTITY": 24,
"I-QUANTITY": 25,
"I-NORP": 26,
"I-LOC": 27,
"B-PRODUCT": 28,
"I-TIME": 29,
"B-EVENT": 30,
"I-EVENT": 31,
"I-FAC": 32,
"B-LANGUAGE": 33,
"I-PRODUCT": 34,
"I-ORDINAL": 35,
"I-LANGUAGE": 36
}
```
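The integer `tags` in each instance decode to IOB2 label strings via the mapping above. A minimal sketch, abridged to the ids that appear in the example instance:

```python
# Sketch: decoding integer tags into IOB2 label strings using the label2id
# mapping above (abridged to the ids used in this example).
label2id = {
    "O": 0, "B-PERSON": 4, "I-PERSON": 5, "B-GPE": 7, "B-ORG": 11, "I-ORG": 12,
}
id2label = {i: label for label, i in label2id.items()}

# A slice of the `train` instance shown earlier.
tokens = ["says", "Phil", "Rosen", "in", "Fleet", "&"]
tags = [0, 4, 5, 0, 11, 12]

labeled = [(tok, id2label[tag]) for tok, tag in zip(tokens, tags)]
print(labeled)
# e.g. ('Phil', 'B-PERSON'), ('Rosen', 'I-PERSON'), ('Fleet', 'B-ORG'), ...
```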
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|ontonotes5|59924| 8528|8262|
### Citation Information
```
@inproceedings{hovy-etal-2006-ontonotes,
title = "{O}nto{N}otes: The 90{\%} Solution",
author = "Hovy, Eduard and
Marcus, Mitchell and
Palmer, Martha and
Ramshaw, Lance and
Weischedel, Ralph",
booktitle = "Proceedings of the Human Language Technology Conference of the {NAACL}, Companion Volume: Short Papers",
month = jun,
year = "2006",
address = "New York City, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N06-2015",
pages = "57--60",
}
``` |
naver-clova-ix/synthdog-en | 2022-07-22T06:42:50.000Z | [
"region:us"
] | naver-clova-ix | null | null | null | 5 | 2,357 | Entry not found |
ade_corpus_v2 | 2023-06-01T14:59:53.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:coreference-resolution",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This is a dataset for Classification if a sentence is ADE-related (True) or not (False) and Relation Extraction between Adverse Drug Event and Drug.
DRUG-AE.rel provides relations between drugs and adverse effects.
DRUG-DOSE.rel provides relations between drugs and dosages.
ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects. | @article{GURULINGAPPA2012885,
title = "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports",
journal = "Journal of Biomedical Informatics",
volume = "45",
number = "5",
pages = "885 - 892",
year = "2012",
note = "Text Mining and Natural Language Processing in Pharmacogenomics",
issn = "1532-0464",
doi = "https://doi.org/10.1016/j.jbi.2012.04.008",
url = "http://www.sciencedirect.com/science/article/pii/S1532046412000615",
author = "Harsha Gurulingappa and Abdul Mateen Rajput and Angus Roberts and Juliane Fluck and Martin Hofmann-Apitius and Luca Toldo",
keywords = "Adverse drug effect, Benchmark corpus, Annotation, Harmonization, Sentence classification",
abstract = "A significant amount of information about drug-related safety issues such as adverse effects are published in medical case reports that can only be explored by human readers due to their unstructured nature. The work presented here aims at generating a systematically annotated corpus that can support the development and validation of methods for the automatic extraction of drug-related adverse effects from medical case reports. The documents are systematically double annotated in various rounds to ensure consistent annotations. The annotated documents are finally harmonized to generate representative consensus annotations. In order to demonstrate an example use case scenario, the corpus was employed to train and validate models for the classification of informative against the non-informative sentences. A Maximum Entropy classifier trained with simple features and evaluated by 10-fold cross-validation resulted in the F1 score of 0.70 indicating a potential useful application of the corpus."
} | null | 17 | 2,342 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-classification
- token-classification
task_ids:
- coreference-resolution
- fact-checking
pretty_name: Adverse Drug Reaction Data v2
dataset_info:
- config_name: Ade_corpus_v2_classification
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not-Related
'1': Related
splits:
- name: train
num_bytes: 3403711
num_examples: 23516
download_size: 3791162
dataset_size: 3403711
- config_name: Ade_corpus_v2_drug_ade_relation
features:
- name: text
dtype: string
- name: drug
dtype: string
- name: effect
dtype: string
- name: indexes
struct:
- name: drug
sequence:
- name: start_char
dtype: int32
- name: end_char
dtype: int32
- name: effect
sequence:
- name: start_char
dtype: int32
- name: end_char
dtype: int32
splits:
- name: train
num_bytes: 1546021
num_examples: 6821
download_size: 3791162
dataset_size: 1546021
- config_name: Ade_corpus_v2_drug_dosage_relation
features:
- name: text
dtype: string
- name: drug
dtype: string
- name: dosage
dtype: string
- name: indexes
struct:
- name: drug
sequence:
- name: start_char
dtype: int32
- name: end_char
dtype: int32
- name: dosage
sequence:
- name: start_char
dtype: int32
- name: end_char
dtype: int32
splits:
- name: train
num_bytes: 64725
num_examples: 279
download_size: 3791162
dataset_size: 64725
train-eval-index:
- config: Ade_corpus_v2_classification
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
config_names:
- Ade_corpus_v2_classification
- Ade_corpus_v2_drug_ade_relation
- Ade_corpus_v2_drug_dosage_relation
---
# Dataset Card for Adverse Drug Reaction Data v2
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.sciencedirect.com/science/article/pii/S1532046412000615
- **Repository:** [Needs More Information]
- **Paper:** https://www.sciencedirect.com/science/article/pii/S1532046412000615
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This is a dataset for Classification if a sentence is ADE-related (True) or not (False) and Relation Extraction between Adverse Drug Event and Drug.
DRUG-AE.rel provides relations between drugs and adverse effects.
DRUG-DOSE.rel provides relations between drugs and dosages.
ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects.
### Supported Tasks and Leaderboards
Text classification (whether a sentence is ADE-related), Relation Extraction
### Languages
English
## Dataset Structure
### Data Instances
#### Config - `Ade_corpus_v2_classification`
```
{
'label': 1,
'text': 'Intravenous azithromycin-induced ototoxicity.'
}
```
#### Config - `Ade_corpus_v2_drug_ade_relation`
```
{
'drug': 'azithromycin',
'effect': 'ototoxicity',
'indexes': {
'drug': {
'end_char': [24],
'start_char': [12]
},
'effect': {
'end_char': [44],
'start_char': [33]
}
},
'text': 'Intravenous azithromycin-induced ototoxicity.'
}
```
#### Config - `Ade_corpus_v2_drug_dosage_relation`
```
{
'dosage': '4 times per day',
'drug': 'insulin',
'indexes': {
'dosage': {
'end_char': [56],
'start_char': [41]
},
'drug': {
'end_char': [40],
'start_char': [33]}
},
'text': 'She continued to receive regular insulin 4 times per day over the following 3 years with only occasional hives.'
}
```
### Data Fields
#### Config - `Ade_corpus_v2_classification`
- `text` - Input text.
- `label` - Whether the sentence is adverse drug effect (ADE)-related (1) or not (0).
#### Config - `Ade_corpus_v2_drug_ade_relation`
- `text` - Input text.
- `drug` - Name of drug.
- `effect` - Effect caused by the drug.
- `indexes.drug.start_char` - Start index of `drug` string in text.
- `indexes.drug.end_char` - End index of `drug` string in text.
- `indexes.effect.start_char` - Start index of `effect` string in text.
- `indexes.effect.end_char` - End index of `effect` string in text.
#### Config - `Ade_corpus_v2_drug_dosage_relation`
- `text` - Input text.
- `drug` - Name of drug.
- `dosage` - Dosage of the drug.
- `indexes.drug.start_char` - Start index of `drug` string in text.
- `indexes.drug.end_char` - End index of `drug` string in text.
- `indexes.dosage.start_char` - Start index of `dosage` string in text.
- `indexes.dosage.end_char` - End index of `dosage` string in text.
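The `start_char`/`end_char` lists are plain Python slice bounds into `text`. A small sketch verifying this against the `Ade_corpus_v2_drug_dosage_relation` example instance shown above:

```python
# Sketch: recovering entity spans from the character indexes of a
# drug-dosage relation record (example instance from this card).
example = {
    "text": "She continued to receive regular insulin 4 times per day over the following 3 years with only occasional hives.",
    "drug": "insulin",
    "dosage": "4 times per day",
    "indexes": {
        "drug": {"start_char": [33], "end_char": [40]},
        "dosage": {"start_char": [41], "end_char": [56]},
    },
}

for field in ("drug", "dosage"):
    span = example["indexes"][field]
    for start, end in zip(span["start_char"], span["end_char"]):
        # Each (start, end) pair slices the text to the annotated entity.
        assert example["text"][start:end] == example[field]
        print(field, "->", example["text"][start:end])
```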
### Data Splits
| Train |
| ------ |
| 23516 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@article{GURULINGAPPA2012885,
title = "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports",
journal = "Journal of Biomedical Informatics",
volume = "45",
number = "5",
pages = "885 - 892",
year = "2012",
note = "Text Mining and Natural Language Processing in Pharmacogenomics",
issn = "1532-0464",
doi = "https://doi.org/10.1016/j.jbi.2012.04.008",
url = "http://www.sciencedirect.com/science/article/pii/S1532046412000615",
author = "Harsha Gurulingappa and Abdul Mateen Rajput and Angus Roberts and Juliane Fluck and Martin Hofmann-Apitius and Luca Toldo",
keywords = "Adverse drug effect, Benchmark corpus, Annotation, Harmonization, Sentence classification",
abstract = "A significant amount of information about drug-related safety issues such as adverse effects are published in medical case reports that can only be explored by human readers due to their unstructured nature. The work presented here aims at generating a systematically annotated corpus that can support the development and validation of methods for the automatic extraction of drug-related adverse effects from medical case reports. The documents are systematically double annotated in various rounds to ensure consistent annotations. The annotated documents are finally harmonized to generate representative consensus annotations. In order to demonstrate an example use case scenario, the corpus was employed to train and validate models for the classification of informative against the non-informative sentences. A Maximum Entropy classifier trained with simple features and evaluated by 10-fold cross-validation resulted in the F1 score of 0.70 indicating a potential useful application of the corpus."
}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
jamescalam/llama-2-arxiv-papers-chunked | 2023-07-25T03:12:24.000Z | [
"language:en",
"arxiv:2307.09288",
"region:us"
] | jamescalam | null | null | null | 9 | 2,341 | ---
language:
- en
pretty_name: Chunked Arxiv Papers for Llama 2
---
This dataset contains chunked extracts (of ~300 tokens) from papers related to (and including) the [Llama 2 research paper](https://arxiv.org/abs/2307.09288). Related papers were identified by following a trail of references, extracting those papers with the [`arxiv-bot`](https://github.com/aurelio-labs/arxiv-bot) package, and repeating. |
exams | 2023-06-01T14:59:56.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:ar",
"language:bg",
"language:de",
"language:es",
"language:fr",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:mk",
"language:pl",
"language:pt",
"language:sq",
"language:sr",
"language:tr",
"language:vi",
"license:cc-by-sa-4.0",
"arxiv:2011.03080",
"region:us"
] | null | EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations.
It consists of more than 24,000 high-quality high school exam questions in 16 languages,
covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. | @article{hardalov2020exams,
title={EXAMS: A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering},
author={Hardalov, Momchil and Mihaylov, Todor and Dimitrina Zlatkova and Yoan Dinkov and Ivan Koychev and Preslav Nakov},
journal={arXiv preprint arXiv:2011.03080},
year={2020}
} | null | 10 | 2,322 | ---
pretty_name: EXAMS
annotations_creators:
- found
language_creators:
- found
language:
- ar
- bg
- de
- es
- fr
- hr
- hu
- it
- lt
- mk
- pl
- pt
- sq
- sr
- tr
- vi
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: exams
dataset_info:
- config_name: alignments
features:
- name: source_id
dtype: string
- name: target_id_list
sequence: string
splits:
- name: full
num_bytes: 1265280
num_examples: 10834
download_size: 169745177
dataset_size: 1265280
- config_name: multilingual
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 3385865
num_examples: 7961
- name: validation
num_bytes: 1143067
num_examples: 2672
- name: test
num_bytes: 5753625
num_examples: 13510
download_size: 169745177
dataset_size: 10282557
- config_name: multilingual_with_para
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 127298595
num_examples: 7961
- name: validation
num_bytes: 42713069
num_examples: 2672
- name: test
num_bytes: 207981218
num_examples: 13510
download_size: 169745177
dataset_size: 377992882
- config_name: crosslingual_test
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: test
num_bytes: 8412531
num_examples: 19736
download_size: 169745177
dataset_size: 8412531
- config_name: crosslingual_with_para_test
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: test
num_bytes: 207981218
num_examples: 13510
download_size: 169745177
dataset_size: 207981218
- config_name: crosslingual_bg
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1078545
num_examples: 2344
- name: validation
num_bytes: 282115
num_examples: 593
download_size: 169745177
dataset_size: 1360660
- config_name: crosslingual_with_para_bg
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 47068024
num_examples: 2344
- name: validation
num_bytes: 11916370
num_examples: 593
download_size: 169745177
dataset_size: 58984394
- config_name: crosslingual_hr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 808320
num_examples: 2341
- name: validation
num_bytes: 176910
num_examples: 538
download_size: 169745177
dataset_size: 985230
- config_name: crosslingual_with_para_hr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 24890820
num_examples: 2341
- name: validation
num_bytes: 5695382
num_examples: 538
download_size: 169745177
dataset_size: 30586202
- config_name: crosslingual_hu
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 678447
num_examples: 1731
- name: validation
num_bytes: 202324
num_examples: 536
download_size: 169745177
dataset_size: 880771
- config_name: crosslingual_with_para_hu
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 19036575
num_examples: 1731
- name: validation
num_bytes: 6043577
num_examples: 536
download_size: 169745177
dataset_size: 25080152
- config_name: crosslingual_it
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 399864
num_examples: 1010
- name: validation
num_bytes: 93343
num_examples: 246
download_size: 169745177
dataset_size: 493207
- config_name: crosslingual_with_para_it
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 16409787
num_examples: 1010
- name: validation
num_bytes: 4018497
num_examples: 246
download_size: 169745177
dataset_size: 20428284
- config_name: crosslingual_mk
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 826582
num_examples: 1665
- name: validation
num_bytes: 204570
num_examples: 410
download_size: 169745177
dataset_size: 1031152
- config_name: crosslingual_with_para_mk
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 38446774
num_examples: 1665
- name: validation
num_bytes: 9673826
num_examples: 410
download_size: 169745177
dataset_size: 48120600
- config_name: crosslingual_pl
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 574246
num_examples: 1577
- name: validation
num_bytes: 141877
num_examples: 394
download_size: 169745177
dataset_size: 716123
- config_name: crosslingual_with_para_pl
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 16374617
num_examples: 1577
- name: validation
num_bytes: 4159076
num_examples: 394
download_size: 169745177
dataset_size: 20533693
- config_name: crosslingual_pt
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 375214
num_examples: 740
- name: validation
num_bytes: 87850
num_examples: 184
download_size: 169745177
dataset_size: 463064
- config_name: crosslingual_with_para_pt
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 12185799
num_examples: 740
- name: validation
num_bytes: 3093848
num_examples: 184
download_size: 169745177
dataset_size: 15279647
- config_name: crosslingual_sq
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 424388
num_examples: 1194
- name: validation
num_bytes: 110293
num_examples: 311
download_size: 169745177
dataset_size: 534681
- config_name: crosslingual_with_para_sq
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 17341921
num_examples: 1194
- name: validation
num_bytes: 4450152
num_examples: 311
download_size: 169745177
dataset_size: 21792073
- config_name: crosslingual_sr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 650268
num_examples: 1323
- name: validation
num_bytes: 145928
num_examples: 314
download_size: 169745177
dataset_size: 796196
- config_name: crosslingual_with_para_sr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 24576553
num_examples: 1323
- name: validation
num_bytes: 5772713
num_examples: 314
download_size: 169745177
dataset_size: 30349266
- config_name: crosslingual_tr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 718431
num_examples: 1571
- name: validation
num_bytes: 182974
num_examples: 393
download_size: 169745177
dataset_size: 901405
- config_name: crosslingual_with_para_tr
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 18597963
num_examples: 1571
- name: validation
num_bytes: 4763341
num_examples: 393
download_size: 169745177
dataset_size: 23361304
- config_name: crosslingual_vi
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 954191
num_examples: 1955
- name: validation
num_bytes: 232264
num_examples: 488
download_size: 169745177
dataset_size: 1186455
- config_name: crosslingual_with_para_vi
features:
- name: id
dtype: string
- name: question
struct:
- name: stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: para
dtype: string
- name: answerKey
dtype: string
- name: info
struct:
- name: grade
dtype: int32
- name: subject
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 40884023
num_examples: 1955
- name: validation
num_bytes: 10260662
num_examples: 488
download_size: 169745177
dataset_size: 51144685
config_names:
- alignments
- crosslingual_bg
- crosslingual_hr
- crosslingual_hu
- crosslingual_it
- crosslingual_mk
- crosslingual_pl
- crosslingual_pt
- crosslingual_sq
- crosslingual_sr
- crosslingual_test
- crosslingual_tr
- crosslingual_vi
- crosslingual_with_para_bg
- crosslingual_with_para_hr
- crosslingual_with_para_hu
- crosslingual_with_para_it
- crosslingual_with_para_mk
- crosslingual_with_para_pl
- crosslingual_with_para_pt
- crosslingual_with_para_sq
- crosslingual_with_para_sr
- crosslingual_with_para_test
- crosslingual_with_para_tr
- crosslingual_with_para_vi
- multilingual
- multilingual_with_para
---
# Dataset Card for EXAMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/mhardalov/exams-qa
- **Paper:** [EXAMS: A Multi-Subject High School Examinations Dataset for Cross-Lingual and Multilingual Question Answering](https://arxiv.org/abs/2011.03080)
- **Point of Contact:** [hardalov@fmi.uni-sofia.bg](mailto:hardalov@fmi.uni-sofia.bg)
### Dataset Summary
EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- ar
- bg
- de
- es
- fr
- hr
- hu
- it
- lt
- mk
- pl
- pt
- sq
- sr
- tr
- vi
## Dataset Structure
### Data Instances
An example of a data instance (with support paragraphs, in Bulgarian) is:
```
{'answerKey': 'C',
'id': '35dd6b52-7e71-11ea-9eb1-54bef70b159e',
'info': {'grade': 12, 'language': 'Bulgarian', 'subject': 'Biology'},
'question': {'choices': {'label': ['A', 'B', 'C', 'D'],
'para': ['Това води до наследствени изменения между организмите. Мирновременните вождове са наследствени. Черният, сивият и кафявият цвят на оцветяване на тялото се определя от пигмента меланин и възниква в резултат на наследствени изменения. Тези различия, според Монтескьо, не са наследствени. Те са и важни наследствени вещи в клана. Те са били наследствени архонти и управляват демократично. Реликвите са исторически, религиозни, семейни (наследствени) и технически. Общо са направени 800 изменения. Не всички наследствени аномалии на хемоглобина са вредни, т.е. Моногенните наследствени болести, които водят до мигрена, са редки. Няма наследствени владетели. Повечето от тях са наследствени и се предават на потомството. Всичките синове са ерцхерцози на всичките наследствени земи и претенденти. През 1509 г. Фраунбергите са издигнати на наследствени имперски графове. Фамилията Валдбург заради постиженията са номинирани на „наследствени имперски трушсеси“. Фамилията Валдбург заради постиженията са номинирани на „наследствени имперски трушсеси“. Описани са единични наследствени случаи, но по-често липсва фамилна обремененост. Позициите им са наследствени и се предават в рамките на клана. Внесени са изменения в конструкцията на веригите. и са направени изменения в ходовата част. На храма са правени лоши архитектурни изменения. Изменения са предприети и вътре в двореца. Имало двама наследствени вождове. Имало двама наследствени вождове. Годишният календар, „компасът“ и биологичния часовник са наследствени и при много бозайници.',
'Постепенно задълбочаващите се функционални изменения довеждат и до структурни изменения. Те се дължат както на растягането на кожата, така и на въздействието на хормоналните изменения върху кожната тъкан. тези изменения се долавят по-ясно. Впоследствие, той претърпява изменения. Ширината остава без изменения. След тяхното издаване се налагат изменения в първоначалния Кодекс, защото не е съобразен с направените в Дигестите изменения. Еволюционният преход се характеризира със следните изменения: Наблюдават се и сезонни изменения в теглото. Приемат се изменения и допълнения към Устава. Тук се размножават и предизвикват възпалителни изменения. Общо са направени 800 изменения. Бронирането не претърпява съществени изменения. При животните се откриват изменения при злокачествената форма. Срещат се и дегенеративни изменения в семенните каналчета. ТАВКР „Баку“ се строи по изменения проект 1143.4. Трансът се съпровожда с определени изменения на мозъчната дейност. На изменения е подложен и Светия Синод. Внесени са изменения в конструкцията на веригите. На храма са правени лоши архитектурни изменения. Оттогава стиховете претърпяват изменения няколко пъти. Настъпват съществени изменения в музикалната култура. По-късно той претърпява леки изменения. Настъпват съществени изменения в музикалната култура. Претърпява сериозни изменения само носовата надстройка. Хоризонталното брониране е оставено без изменения.',
'Модификациите са обратими. Тези реакции са обратими. В началните стадии тези натрупвания са обратими. Всички такива ефекти са временни и обратими. Много от реакциите са обратими и идентични с тези при гликолизата. Ако в обращение има книжни пари, те са обратими в злато при поискване . Общо са направени 800 изменения. Непоследователността е представена от принципа на "симетрия", при който взаимоотношенията са разглеждани като симетрични или обратими. Откакто формулите в клетките на електронната таблица не са обратими, тази техника е с ограничена стойност. Ефектът на Пелтие-Зеебек и ефектът Томсън са обратими (ефектът на Пелтие е обратен на ефекта на Зеебек). Плазмолизата протича в три етапа, в зависимост от силата и продължителността на въздействието:\n\nПървите два етапа са обратими. Внесени са изменения в конструкцията на веригите. и са направени изменения в ходовата част. На храма са правени лоши архитектурни изменения. Изменения са предприети и вътре в двореца. Оттогава насетне екипите не са претърпявали съществени изменения. Изменения са направени и в колесника на машината. Тези изменения са обявени през октомври 1878 година. Последните изменения са внесени през януари 2009 година. В процеса на последващото проектиране са внесени някои изменения. Сериозните изменения са в края на Втората световна война. Внесени са изменения в конструкцията на погребите и подемниците. Внесени са изменения в конструкцията на погребите и подемниците. Внесени са изменения в конструкцията на погребите и подемниците. Постепенно задълбочаващите се функционални изменения довеждат и до структурни изменения.',
'Ерозионни процеси от масов характер липсват. Обновлението в редиците на партията приема масов характер. Тя обаче няма масов характер поради спецификата на формата. Движението против десятъка придобива масов характер и в Балчишка околия. Понякога екзекутирането на „обсебените от Сатана“ взимало невероятно масов характер. Укриването на дължими като наряд продукти в селата придобива масов характер. Периодичните миграции са в повечето случаи с масов характер и са свързани със сезонните изменения в природата, а непериодичните са премествания на животни, които настъпват след пожари, замърсяване на средата, висока численост и др. Имат необратим характер. Именно по време на двувековните походи на западните рицари използването на гербовете придобива масов характер. След присъединяването на Южен Кавказ към Русия, изселването на азербайджанци от Грузия придобива масов характер. Те имат нормативен характер. Те имат установителен характер. Освобождаването на работна сила обикновено има масов характер, защото обхваща големи контингенти от носителите на труд. Валежите имат подчертано континентален характер. Имат най-често издънков характер. Приливите имат предимно полуденонощен характер. Някои от тях имат мистериален характер. Тези сведения имат случаен, епизодичен характер. Те имат сезонен или годишен характер. Временните обезпечителни мерки имат временен характер. Други имат пожелателен характер (Здравко, Слава). Ловът и събирачеството имат спомагателен характер. Фактически успяват само малко да усилят бронирането на артилерийските погреби, другите изменения носят само частен характер. Някои карикатури имат само развлекателен характер, докато други имат политически нюанси. Поемите на Хезиод имат по-приложен характер.'],
'text': ['дължат се на фенотипни изменения',
'имат масов характер',
'са наследствени',
'са обратими']},
'stem': 'Мутационите изменения:'}}
```
### Data Fields
A data instance contains the following fields:
- `id`: A question ID, unique across the dataset
- `question`: the question contains the following:
  - `stem`: the text of the question stem
- `choices`: a set of 3 to 5 candidate answers, which each have:
- `text`: the text of the answers
- `label`: a label in `['A', 'B', 'C', 'D', 'E']` used to match to the `answerKey`
    - `para`: (optional) a supporting paragraph from Wikipedia in the same language as the question and answer
- `answerKey`: the key corresponding to the right answer's `label`
- `info`: some additional information on the question including:
- `grade`: the school grade for the exam this question was taken from
- `subject`: a free text description of the academic subject
- `language`: the English name of the language for this question
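The fields above can be exercised directly; below is a minimal sketch of matching `answerKey` to the correct choice text, using a trimmed-down version of the Bulgarian instance shown earlier (the `para` list is omitted for brevity):

```python
# Sketch: resolving `answerKey` to the correct answer text.
# Field layout follows the EXAMS schema described above.
instance = {
    "id": "35dd6b52-7e71-11ea-9eb1-54bef70b159e",
    "answerKey": "C",
    "question": {
        "stem": "Мутационите изменения:",
        "choices": {
            "label": ["A", "B", "C", "D"],
            "text": [
                "дължат се на фенотипни изменения",
                "имат масов характер",
                "са наследствени",
                "са обратими",
            ],
        },
    },
}

labels = instance["question"]["choices"]["label"]
texts = instance["question"]["choices"]["text"]
# The labels and texts are parallel lists, so index lookup suffices.
correct = texts[labels.index(instance["answerKey"])]
print(correct)  # -> са наследствени
```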
### Data Splits
Depending on the configuration, the dataset has different splits:
- "alignments": a single "full" split
- "multilingual" and "multilingual_with_para": "train", "validation" and "test" splits
- "crosslingual_test" and "crosslingual_with_para_test": a single "test" split
- the rest of crosslingual configurations: "train" and "validation" splits
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
EXAMS was collected from official state exams prepared by the ministries of education of various countries. These exams are taken by students graduating from high school, and often require knowledge learned through the entire course.
The questions cover a large variety of subjects and material based on each country's education system. They include major school subjects such as Biology, Chemistry, Geography, History, and Physics, but also highly specialized ones such as Agriculture, Geology, and Informatics, as well as some applied and profiled studies.
Some countries allow students to take official examinations in several languages. This dataset provides 9,857 parallel question pairs spread across seven languages coming from Croatia (Croatian, Serbian, Italian, Hungarian), Hungary (Hungarian, German, French, Spanish, Croatian, Serbian, Italian), and North Macedonia (Macedonian, Albanian, Turkish).
For all languages in the dataset, the first step in the process of data collection was to download the PDF files per year, per subject, and per language (when parallel languages were available in the same source), convert the PDF files to text, and select those that were well formatted and followed the document structure.
Then, regular expressions (RegEx) were used to parse the questions, their corresponding choices, and the correct answer choice. To ensure that all questions are answerable using textual input only, questions containing visual information were removed; these were identified using a curated list of words such as map, table, picture, graph, etc., in the corresponding language.
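The authors' actual patterns are not published in this card; the following is only an illustrative sketch of RegEx-based parsing of a lettered multiple-choice question and of filtering visual questions by keyword (the exam fragment and word list are hypothetical):

```python
import re

# Toy exam fragment: one numbered question stem followed by lettered choices.
raw = """1. Which gas do plants absorb during photosynthesis?
A) Oxygen
B) Carbon dioxide
C) Nitrogen
D) Hydrogen"""

# Parse the stem from the numbered first line.
stem_match = re.match(r"\d+\.\s*(?P<stem>.+)", raw.splitlines()[0])
# Parse lettered choices (A–E) line by line.
choices = re.findall(r"^([A-E])\)\s*(.+)$", raw, flags=re.MULTILINE)

# Filter out questions referring to visual material via a curated word list
# (here a hypothetical English list; the paper used per-language lists).
visual_words = {"map", "table", "picture", "graph"}
is_visual = any(w in raw.lower() for w in visual_words)

print(stem_match.group("stem"))  # Which gas do plants absorb during photosynthesis?
print(choices)  # [('A', 'Oxygen'), ('B', 'Carbon dioxide'), ('C', 'Nitrogen'), ('D', 'Hydrogen')]
```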
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset, which contains paragraphs from Wikipedia, is licensed under CC-BY-SA 4.0. The code in this repository is licensed according the [LICENSE file](https://raw.githubusercontent.com/mhardalov/exams-qa/main/LICENSE).
### Citation Information
```
@article{hardalov2020exams,
title={EXAMS: A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering},
  author={Hardalov, Momchil and Mihaylov, Todor and Zlatkova, Dimitrina and Dinkov, Yoan and Koychev, Ivan and Nakov, Preslav},
journal={arXiv preprint arXiv:2011.03080},
year={2020}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
tatoeba | 2022-11-03T16:32:34.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ab",
"language:acm",
"language:ady",
"language:af",
"language:afb",
"language:afh",
"language:aii",
"language:ain",
"language:ajp",
"language:akl",
"language:aln",
"language:am",
"language:an",
"language:ang",
"language:aoz",
"language:apc",
"language:ar",
"language:arq",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:avk",
"language:awa",
"language:ayl",
"language:az",
"language:ba",
"language:bal",
"language:bar",
"language:be",
"language:ber",
"language:bg",
"language:bho",
"language:bjn",
"language:bm",
"language:bn",
"language:bo",
"language:br",
"language:brx",
"language:bs",
"language:bua",
"language:bvy",
"language:bzt",
"language:ca",
"language:cay",
"language:cbk",
"language:ce",
"language:ceb",
"language:ch",
"language:chg",
"language:chn",
"language:cho",
"language:chr",
"language:cjy",
"language:ckb",
"language:ckt",
"language:cmn",
"language:co",
"language:code",
"language:cpi",
"language:crh",
"language:crk",
"language:cs",
"language:csb",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dng",
"language:drt",
"language:dsb",
"language:dtp",
"language:dv",
"language:dws",
"language:ee",
"language:egl",
"language:el",
"language:emx",
"language:en",
"language:enm",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fi",
"language:fj",
"language:fkv",
"language:fo",
"language:fr",
"language:frm",
"language:fro",
"language:frr",
"language:fuc",
"language:fur",
"language:fuv",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gbm",
"language:gcf",
"language:gd",
"language:gil",
"language:gl",
"language:gn",
"language:gom",
"language:gos",
"language:got",
"language:grc",
"language:gsw",
"language:gu",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:hbo",
"language:he",
"language:hi",
"language:hif",
"language:hil",
"language:hnj",
"language:hoc",
"language:hr",
"language:hrx",
"language:hsb",
"language:hsn",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:iba",
"language:id",
"language:ie",
"language:ig",
"language:ii",
"language:ike",
"language:ilo",
"language:io",
"language:is",
"language:it",
"language:izh",
"language:ja",
"language:jam",
"language:jbo",
"language:jdt",
"language:jpa",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kam",
"language:kek",
"language:kha",
"language:kjh",
"language:kk",
"language:kl",
"language:km",
"language:kmr",
"language:kn",
"language:ko",
"language:koi",
"language:kpv",
"language:krc",
"language:krl",
"language:ksh",
"language:ku",
"language:kum",
"language:kw",
"language:kxi",
"language:ky",
"language:la",
"language:laa",
"language:lad",
"language:lb",
"language:ldn",
"language:lfn",
"language:lg",
"language:lij",
"language:liv",
"language:lkt",
"language:lld",
"language:lmo",
"language:ln",
"language:lo",
"language:lt",
"language:ltg",
"language:lut",
"language:lv",
"language:lzh",
"language:lzz",
"language:mad",
"language:mai",
"language:max",
"language:mdf",
"language:mfe",
"language:mg",
"language:mgm",
"language:mh",
"language:mhr",
"language:mi",
"language:mic",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mni",
"language:mnw",
"language:moh",
"language:mr",
"language:mt",
"language:mvv",
"language:mwl",
"language:mww",
"language:my",
"language:myv",
"language:na",
"language:nah",
"language:nan",
"language:nb",
"language:nch",
"language:nds",
"language:ngt",
"language:ngu",
"language:niu",
"language:nl",
"language:nlv",
"language:nn",
"language:nog",
"language:non",
"language:nov",
"language:npi",
"language:nst",
"language:nus",
"language:nv",
"language:ny",
"language:nys",
"language:oar",
"language:oc",
"language:ofs",
"language:ood",
"language:or",
"language:orv",
"language:os",
"language:osp",
"language:ota",
"language:otk",
"language:pa",
"language:pag",
"language:pal",
"language:pam",
"language:pap",
"language:pau",
"language:pcd",
"language:pdc",
"language:pes",
"language:phn",
"language:pi",
"language:pl",
"language:pms",
"language:pnb",
"language:ppl",
"language:prg",
"language:ps",
"language:pt",
"language:qu",
"language:quc",
"language:qya",
"language:rap",
"language:rif",
"language:rm",
"language:rn",
"language:ro",
"language:rom",
"language:ru",
"language:rue",
"language:rw",
"language:sa",
"language:sah",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:sdh",
"language:se",
"language:sg",
"language:sgs",
"language:shs",
"language:shy",
"language:si",
"language:sjn",
"language:sl",
"language:sm",
"language:sma",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:stq",
"language:su",
"language:sux",
"language:sv",
"language:swg",
"language:swh",
"language:syc",
"language:ta",
"language:te",
"language:tet",
"language:tg",
"language:th",
"language:thv",
"language:ti",
"language:tig",
"language:tk",
"language:tl",
"language:tlh",
"language:tly",
"language:tmr",
"language:tmw",
"language:tn",
"language:to",
"language:toi",
"language:tok",
"language:tpi",
"language:tpw",
"language:tr",
"language:ts",
"language:tt",
"language:tts",
"language:tvl",
"language:ty",
"language:tyv",
"language:tzl",
"language:udm",
"language:ug",
"language:uk",
"language:umb",
"language:ur",
"language:uz",
"language:vec",
"language:vep",
"language:vi",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xqa",
"language:yi",
"language:yo",
"language:yue",
"language:zlm",
"language:zsm",
"language:zu",
"language:zza",
"license:cc-by-2.0",
"region:us"
] | null | This is a collection of translated sentences from Tatoeba
359 languages, 3,403 bitexts
total number of files: 750
total number of tokens: 65.54M
total number of sentence fragments: 8.96M | @InProceedings{TIEDEMANN12.463,
  author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 19 | 2,317 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ab
- acm
- ady
- af
- afb
- afh
- aii
- ain
- ajp
- akl
- aln
- am
- an
- ang
- aoz
- apc
- ar
- arq
- ary
- arz
- as
- ast
- avk
- awa
- ayl
- az
- ba
- bal
- bar
- be
- ber
- bg
- bho
- bjn
- bm
- bn
- bo
- br
- brx
- bs
- bua
- bvy
- bzt
- ca
- cay
- cbk
- ce
- ceb
- ch
- chg
- chn
- cho
- chr
- cjy
- ckb
- ckt
- cmn
- co
- code
- cpi
- crh
- crk
- cs
- csb
- cv
- cy
- da
- de
- dng
- drt
- dsb
- dtp
- dv
- dws
- ee
- egl
- el
- emx
- en
- enm
- eo
- es
- et
- eu
- ext
- fi
- fj
- fkv
- fo
- fr
- frm
- fro
- frr
- fuc
- fur
- fuv
- fy
- ga
- gag
- gan
- gbm
- gcf
- gd
- gil
- gl
- gn
- gom
- gos
- got
- grc
- gsw
- gu
- gv
- ha
- hak
- haw
- hbo
- he
- hi
- hif
- hil
- hnj
- hoc
- hr
- hrx
- hsb
- hsn
- ht
- hu
- hy
- ia
- iba
- id
- ie
- ig
- ii
- ike
- ilo
- io
- is
- it
- izh
- ja
- jam
- jbo
- jdt
- jpa
- jv
- ka
- kaa
- kab
- kam
- kek
- kha
- kjh
- kk
- kl
- km
- kmr
- kn
- ko
- koi
- kpv
- krc
- krl
- ksh
- ku
- kum
- kw
- kxi
- ky
- la
- laa
- lad
- lb
- ldn
- lfn
- lg
- lij
- liv
- lkt
- lld
- lmo
- ln
- lo
- lt
- ltg
- lut
- lv
- lzh
- lzz
- mad
- mai
- max
- mdf
- mfe
- mg
- mgm
- mh
- mhr
- mi
- mic
- min
- mk
- ml
- mn
- mni
- mnw
- moh
- mr
- mt
- mvv
- mwl
- mww
- my
- myv
- na
- nah
- nan
- nb
- nch
- nds
- ngt
- ngu
- niu
- nl
- nlv
- nn
- nog
- non
- nov
- npi
- nst
- nus
- nv
- ny
- nys
- oar
- oc
- ofs
- ood
- or
- orv
- os
- osp
- ota
- otk
- pa
- pag
- pal
- pam
- pap
- pau
- pcd
- pdc
- pes
- phn
- pi
- pl
- pms
- pnb
- ppl
- prg
- ps
- pt
- qu
- quc
- qya
- rap
- rif
- rm
- rn
- ro
- rom
- ru
- rue
- rw
- sa
- sah
- sc
- scn
- sco
- sd
- sdh
- se
- sg
- sgs
- shs
- shy
- si
- sjn
- sl
- sm
- sma
- sn
- so
- sq
- sr
- stq
- su
- sux
- sv
- swg
- swh
- syc
- ta
- te
- tet
- tg
- th
- thv
- ti
- tig
- tk
- tl
- tlh
- tly
- tmr
- tmw
- tn
- to
- toi
- tok
- tpi
- tpw
- tr
- ts
- tt
- tts
- tvl
- ty
- tyv
- tzl
- udm
- ug
- uk
- umb
- ur
- uz
- vec
- vep
- vi
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xqa
- yi
- yo
- yue
- zlm
- zsm
- zu
- zza
license:
- cc-by-2.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tatoeba
pretty_name: Tatoeba
dataset_info:
- config_name: en-mr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- mr
splits:
- name: train
num_bytes: 6190484
num_examples: 53462
download_size: 1436200
dataset_size: 6190484
- config_name: eo-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- nl
splits:
- name: train
num_bytes: 8150048
num_examples: 93650
download_size: 3020382
dataset_size: 8150048
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 6180464
num_examples: 67782
download_size: 2340361
dataset_size: 6180464
- config_name: fr-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 19775390
num_examples: 195161
download_size: 5509784
dataset_size: 19775390
- config_name: es-gl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- gl
splits:
- name: train
num_bytes: 287683
num_examples: 3135
download_size: 128506
dataset_size: 287683
---
# Dataset Card for Tatoeba
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Tatoeba.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Tatoeba is a collection of sentences and translations.
To load a language pair which isn't part of the predefined configs, simply specify the two language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tatoeba.php
E.g.
`dataset = load_dataset("tatoeba", lang1="en", lang2="he")`
The default release is v2021-07-22, but you can also select another release with
`dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")`
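Each loaded example follows the standard `datasets` translation schema shown in the config features (an `id` string plus a `translation` dict keyed by language code). A minimal offline sketch of that layout and a common flattening step — the sentences are illustrative placeholders, not real Tatoeba data:

```python
# Offline sketch of the record layout this loader yields, per the
# `translation` feature declared above. Sentences are illustrative.
examples = [
    {"id": "0", "translation": {"en": "Hello.", "he": "שלום."}},
    {"id": "1", "translation": {"en": "Thank you.", "he": "תודה."}},
]

# Common preprocessing: flatten the nested dicts into (source, target) pairs.
pairs = [(ex["translation"]["en"], ex["translation"]["he"]) for ex in examples]
print(pairs[0])  # ('Hello.', 'שלום.')
```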
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- ab
- acm
- ady
- af
- afb
- afh
- aii
- ain
- ajp
- akl
- aln
- am
- an
- ang
- aoz
- apc
- ar
- arq
- ary
- arz
- as
- ast
- avk
- awa
- ayl
- az
- ba
- bal
- bar
- be
- ber
- bg
- bho
- bjn
- bm
- bn
- bo
- br
- brx
- bs
- bua
- bvy
- bzt
- ca
- cay
- cbk
- ce
- ceb
- ch
- chg
- chn
- cho
- chr
- cjy
- ckb
- ckt
- cmn
- co
- code
- cpi
- crh
- crk
- cs
- csb
- cv
- cy
- da
- de
- dng
- drt
- dsb
- dtp
- dv
- dws
- ee
- egl
- el
- emx
- en
- enm
- eo
- es
- et
- eu
- ext
- fi
- fj
- fkv
- fo
- fr
- frm
- fro
- frr
- fuc
- fur
- fuv
- fy
- ga
- gag
- gan
- gbm
- gcf
- gd
- gil
- gl
- gn
- gom
- gos
- got
- grc
- gsw
- gu
- gv
- ha
- hak
- haw
- hbo
- he
- hi
- hif
- hil
- hnj
- hoc
- hr
- hrx
- hsb
- hsn
- ht
- hu
- hy
- ia
- iba
- id
- ie
- ig
- ii
- ike
- ilo
- io
- is
- it
- izh
- ja
- jam
- jbo
- jdt
- jpa
- jv
- ka
- kaa
- kab
- kam
- kek
- kha
- kjh
- kk
- kl
- km
- kmr
- kn
- ko
- koi
- kpv
- krc
- krl
- ksh
- ku
- kum
- kw
- kxi
- ky
- kzj (deprecated tag; superseded by `dtp`)
- la
- laa
- lad
- lb
- ldn
- lfn
- lg
- lij
- liv
- lkt
- lld
- lmo
- ln
- lo
- lt
- ltg
- lut
- lv
- lzh
- lzz
- mad
- mai
- max
- mdf
- mfe
- mg
- mgm
- mh
- mhr
- mi
- mic
- min
- mk
- ml
- mn
- mni
- mnw
- moh
- mr
- mt
- mvv
- mwl
- mww
- my
- myv
- na
- nah
- nan
- nb
- nch
- nds
- ngt
- ngu
- niu
- nl
- nlv
- nn
- nog
- non
- nov
- npi
- nst
- nus
- nv
- ny
- nys
- oar
- oc
- ofs
- ood
- or
- orv
- os
- osp
- ota
- otk
- pa
- pag
- pal
- pam
- pap
- pau
- pcd
- pdc
- pes
- phn
- pi
- pl
- pms
- pnb
- ppl
- prg
- ps
- pt
- qu
- quc
- qya
- rap
- rif
- rm
- rn
- ro
- rom
- ru
- rue
- rw
- sa
- sah
- sc
- scn
- sco
- sd
- sdh
- se
- sg
- sgs
- shs
- shy
- si
- sjn
- sl
- sm
- sma
- sn
- so
- sq
- sr
- stq
- su
- sux
- sv
- swg
- swh
- syc
- ta
- te
- tet
- tg
- th
- thv
- ti
- tig
- tk
- tl
- tlh
- tly
- tmr
- tmw
- tn
- to
- toi
- tok
- tpi
- tpw
- tr
- ts
- tt
- tts
- tvl
- ty
- tyv
- tzl
- udm
- ug
- uk
- umb
- ur
- uz
- vec
- vep
- vi
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xqa
- yi
- yo
- yue
- zlm
- zsm
- zu
- zza
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
HuggingFaceH4/testing_alpaca_small | 2023-04-12T21:55:05.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 2,315 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 33856
num_examples: 100
- name: test
num_bytes: 32475
num_examples: 100
download_size: 52543
dataset_size: 66331
---
# Dataset Card for "testing_alpaca_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
monash_tsf | 2023-06-13T13:26:34.000Z | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"region:us"
] | null | Monash Time Series Forecasting Repository which contains 30+ datasets of related time series for global forecasting research. This repository includes both real-world and competition time series datasets covering varied domains. | @InProceedings{godahewa2021monash,
author = "Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
title = "Monash Time Series Forecasting Archive",
booktitle = "Neural Information Processing Systems Track on Datasets and Benchmarks",
year = "2021",
note = "forthcoming"
} | null | 20 | 2,302 | ---
annotations_creators:
- no-annotation
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Monash Time Series Forecasting Repository
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
dataset_info:
- config_name: weather
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 176893738
num_examples: 3010
- name: test
num_bytes: 177638713
num_examples: 3010
- name: validation
num_bytes: 177266226
num_examples: 3010
download_size: 38820451
dataset_size: 531798677
- config_name: tourism_yearly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 54264
num_examples: 518
- name: test
num_bytes: 71358
num_examples: 518
- name: validation
num_bytes: 62811
num_examples: 518
download_size: 36749
dataset_size: 188433
- config_name: tourism_quarterly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 162738
num_examples: 427
- name: test
num_bytes: 190920
num_examples: 427
- name: validation
num_bytes: 176829
num_examples: 427
download_size: 93833
dataset_size: 530487
- config_name: tourism_monthly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 391518
num_examples: 366
- name: test
num_bytes: 463986
num_examples: 366
- name: validation
num_bytes: 427752
num_examples: 366
download_size: 199791
dataset_size: 1283256
- config_name: cif_2016
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 24731
num_examples: 72
- name: test
num_bytes: 31859
num_examples: 72
- name: validation
num_bytes: 28295
num_examples: 72
download_size: 53344
dataset_size: 84885
- config_name: london_smart_meters
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 684386194
num_examples: 5560
- name: test
num_bytes: 687138394
num_examples: 5560
- name: validation
num_bytes: 685762294
num_examples: 5560
download_size: 219673439
dataset_size: 2057286882
- config_name: australian_electricity_demand
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 4763162
num_examples: 5
- name: test
num_bytes: 4765637
num_examples: 5
- name: validation
num_bytes: 4764400
num_examples: 5
download_size: 5770526
dataset_size: 14293199
- config_name: wind_farms_minutely
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 710078918
num_examples: 339
- name: test
num_bytes: 710246723
num_examples: 339
- name: validation
num_bytes: 710162820
num_examples: 339
download_size: 71383130
dataset_size: 2130488461
- config_name: bitcoin
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 336511
num_examples: 18
- name: test
num_bytes: 340966
num_examples: 18
- name: validation
num_bytes: 338738
num_examples: 18
download_size: 220403
dataset_size: 1016215
- config_name: pedestrian_counts
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 12897120
num_examples: 66
- name: test
num_bytes: 12923256
num_examples: 66
- name: validation
num_bytes: 12910188
num_examples: 66
download_size: 4587054
dataset_size: 38730564
- config_name: vehicle_trips
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 105261
num_examples: 329
- name: test
num_bytes: 186688
num_examples: 329
- name: validation
num_bytes: 145974
num_examples: 329
download_size: 44914
dataset_size: 437923
- config_name: kdd_cup_2018
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 12040046
num_examples: 270
- name: test
num_bytes: 12146966
num_examples: 270
- name: validation
num_bytes: 12093506
num_examples: 270
download_size: 2456948
dataset_size: 36280518
- config_name: nn5_daily
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 314828
num_examples: 111
- name: test
num_bytes: 366110
num_examples: 111
- name: validation
num_bytes: 340469
num_examples: 111
download_size: 287708
dataset_size: 1021407
- config_name: nn5_weekly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 48344
num_examples: 111
- name: test
num_bytes: 55670
num_examples: 111
- name: validation
num_bytes: 52007
num_examples: 111
download_size: 62043
dataset_size: 156021
- config_name: kaggle_web_traffic
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 415494391
num_examples: 145063
- name: test
num_bytes: 486103806
num_examples: 145063
- name: validation
num_bytes: 450799098
num_examples: 145063
download_size: 145485324
dataset_size: 1352397295
- config_name: kaggle_web_traffic_weekly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 64242469
num_examples: 145063
- name: test
num_bytes: 73816627
num_examples: 145063
- name: validation
num_bytes: 69029548
num_examples: 145063
download_size: 28930900
dataset_size: 207088644
- config_name: solar_10_minutes
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 29640033
num_examples: 137
- name: test
num_bytes: 29707848
num_examples: 137
- name: validation
num_bytes: 29673941
num_examples: 137
download_size: 4559353
dataset_size: 89021822
- config_name: solar_weekly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 28614
num_examples: 137
- name: test
num_bytes: 34265
num_examples: 137
- name: validation
num_bytes: 31439
num_examples: 137
download_size: 24375
dataset_size: 94318
- config_name: car_parts
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 396653
num_examples: 2674
- name: test
num_bytes: 661379
num_examples: 2674
- name: validation
num_bytes: 529016
num_examples: 2674
download_size: 39656
dataset_size: 1587048
- config_name: fred_md
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 314514
num_examples: 107
- name: test
num_bytes: 325107
num_examples: 107
- name: validation
num_bytes: 319811
num_examples: 107
download_size: 169107
dataset_size: 959432
- config_name: traffic_hourly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 62071974
num_examples: 862
- name: test
num_bytes: 62413326
num_examples: 862
- name: validation
num_bytes: 62242650
num_examples: 862
download_size: 22868806
dataset_size: 186727950
- config_name: traffic_weekly
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 344154
num_examples: 862
- name: test
num_bytes: 401046
num_examples: 862
- name: validation
num_bytes: 372600
num_examples: 862
download_size: 245126
dataset_size: 1117800
- config_name: hospital
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 217625
num_examples: 767
- name: test
num_bytes: 293558
num_examples: 767
- name: validation
num_bytes: 255591
num_examples: 767
download_size: 78110
dataset_size: 766774
- config_name: covid_deaths
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 176352
num_examples: 266
- name: test
num_bytes: 242187
num_examples: 266
- name: validation
num_bytes: 209270
num_examples: 266
download_size: 27335
dataset_size: 627809
- config_name: sunspot
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 304726
num_examples: 1
- name: test
num_bytes: 304974
num_examples: 1
- name: validation
num_bytes: 304850
num_examples: 1
download_size: 68865
dataset_size: 914550
- config_name: saugeenday
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 97722
num_examples: 1
- name: test
num_bytes: 97969
num_examples: 1
- name: validation
num_bytes: 97845
num_examples: 1
download_size: 28721
dataset_size: 293536
- config_name: us_births
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 29923
num_examples: 1
- name: test
num_bytes: 30171
num_examples: 1
- name: validation
num_bytes: 30047
num_examples: 1
download_size: 16332
dataset_size: 90141
- config_name: solar_4_seconds
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 30513083
num_examples: 1
- name: test
num_bytes: 30513578
num_examples: 1
- name: validation
num_bytes: 30513331
num_examples: 1
download_size: 794502
dataset_size: 91539992
- config_name: wind_4_seconds
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 30512774
num_examples: 1
- name: test
num_bytes: 30513269
num_examples: 1
- name: validation
num_bytes: 30513021
num_examples: 1
download_size: 2226184
dataset_size: 91539064
- config_name: rideshare
features:
- name: start
dtype: timestamp[s]
- name: target
sequence:
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 4249051
num_examples: 156
- name: test
num_bytes: 5161435
num_examples: 156
- name: validation
num_bytes: 4705243
num_examples: 156
download_size: 1031826
dataset_size: 14115729
- config_name: oikolab_weather
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 3299142
num_examples: 8
- name: test
num_bytes: 3302310
num_examples: 8
- name: validation
num_bytes: 3300726
num_examples: 8
download_size: 1326101
dataset_size: 9902178
- config_name: temperature_rain
features:
- name: start
dtype: timestamp[s]
- name: target
sequence:
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 88121466
num_examples: 422
- name: test
num_bytes: 96059286
num_examples: 422
- name: validation
num_bytes: 92090376
num_examples: 422
download_size: 25747139
dataset_size: 276271128
---
# Dataset Card for Monash Time Series Forecasting Repository
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Monash Time Series Forecasting Repository](https://forecastingdata.org/)
- **Repository:** [Monash Time Series Forecasting Repository code repository](https://github.com/rakshitha123/TSForecasting)
- **Paper:** [Monash Time Series Forecasting Archive](https://openreview.net/pdf?id=wEc1mgAjU-)
- **Leaderboard:** [Baseline Results](https://forecastingdata.org/#results)
- **Point of Contact:** [Rakshitha Godahewa](mailto:rakshitha.godahewa@monash.edu)
### Dataset Summary
The first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended to be used for research purposes only. The repository contains 30 datasets, including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, bringing the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains.
The following table shows a list of datasets available:
| Name | Domain | No. of series | Freq. | Pred. Len. | Source |
|-------------------------------|-----------|---------------|--------|------------|-------------------------------------------------------------------------------------------------------------------------------------|
| weather | Nature | 3010 | 1D | 30 | [Sparks et al., 2020](https://cran.r-project.org/web/packages/bomrang) |
| tourism_yearly | Tourism | 1311 | 1Y | 4 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| tourism_quarterly | Tourism | 1311 | 1Q-JAN | 8 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| tourism_monthly | Tourism | 1311 | 1M | 24 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| cif_2016 | Banking | 72 | 1M | 12 | [Stepnicka and Burda, 2017](https://doi.org/10.1109/FUZZ-IEEE.2017.8015455) |
| london_smart_meters | Energy | 5560 | 30T | 60 | [Jean-Michel, 2019](https://www.kaggle.com/jeanmidev/smart-meters-in-london) |
| australian_electricity_demand | Energy | 5 | 30T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| wind_farms_minutely | Energy | 339 | 1T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| bitcoin | Economic | 18 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| pedestrian_counts | Transport | 66 | 1H | 48 | [City of Melbourne, 2020](https://data.melbourne.vic.gov.au/Transport/Pedestrian-Counting-System-Monthly-counts-per-hour/b2ak-trbp) |
| vehicle_trips | Transport | 329 | 1D | 30 | [fivethirtyeight, 2015](https://github.com/fivethirtyeight/uber-tlc-foil-response) |
| kdd_cup_2018 | Nature | 270 | 1H | 48 | [KDD Cup, 2018](https://www.kdd.org/kdd2018/kdd-cup) |
| nn5_daily | Banking | 111 | 1D | 56 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) |
| nn5_weekly | Banking | 111 | 1W-MON | 8 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) |
| kaggle_web_traffic | Web | 145063 | 1D | 59 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) |
| kaggle_web_traffic_weekly | Web | 145063 | 1W-WED | 8 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) |
| solar_10_minutes | Energy | 137 | 10T | 60 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) |
| solar_weekly | Energy | 137 | 1W-SUN | 5 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) |
| car_parts | Sales | 2674 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) |
| fred_md | Economic | 107 | 1M | 12 | [McCracken and Ng, 2016](https://doi.org/10.1080/07350015.2015.1086655) |
| traffic_hourly | Transport | 862 | 1H | 48 | [Caltrans, 2020](http://pems.dot.ca.gov/) |
| traffic_weekly | Transport | 862 | 1W-WED | 8 | [Caltrans, 2020](http://pems.dot.ca.gov/) |
| hospital | Health | 767 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) |
| covid_deaths | Health | 266 | 1D | 30 | [Johns Hopkins University, 2020](https://github.com/CSSEGISandData/COVID-19) |
| sunspot | Nature | 1 | 1D | 30 | [Sunspot, 2015](http://www.sidc.be/silso/newdataset) |
| saugeenday | Nature | 1 | 1D | 30 | [McLeod and Gweon, 2013](http://www.jenvstat.org/v04/i11) |
| us_births | Health | 1 | 1D | 30 | [Pruim et al., 2020](https://cran.r-project.org/web/packages/mosaicData) |
| solar_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| wind_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| rideshare | Transport | 2304 | 1H | 48 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| oikolab_weather | Nature | 8 | 1H | 48 | [Oikolab](https://oikolab.com/) |
| temperature_rain              | Nature    | 32072         | 1D     | 30         | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- )                                                                     |
### Dataset Usage
To load a particular dataset just specify its name from the table above e.g.:
```python
load_dataset("monash_tsf", "nn5_daily")
```
> Notes:
> - Data might contain missing values as in the original datasets.
> - The prediction length is either specified in the dataset or a default value depending on the frequency is used as in the original repository benchmark.
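Because targets can contain missing values, downstream code needs a strategy for handling them. A minimal sketch in plain Python (the record below is a made-up toy sample, not actual data from the repository):

```python
import math

# A toy series in the style of a repository record: the target may contain
# NaN where the original dataset had missing observations.
record = {
    "start": "2012-01-01 00:00:00",
    "target": [14.0, float("nan"), 21.0, 20.0, float("nan"), 20.0],
    "item_id": "0",
}

# One simple strategy: drop the missing values entirely.
observed = [v for v in record["target"] if not math.isnan(v)]

# Another: forward-fill, carrying the last observed value.
filled, last = [], None
for v in record["target"]:
    last = v if not math.isnan(v) else last
    filled.append(last)

print(observed)  # [14.0, 21.0, 20.0, 20.0]
print(filled)    # [14.0, 14.0, 21.0, 20.0, 20.0, 20.0]
```

Which strategy is appropriate depends on the model; many forecasting libraries also accept NaNs directly.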
### Supported Tasks and Leaderboards
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split.
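As a toy illustration of this setup, a seasonal-naive baseline (repeat the value observed one season earlier) can be scored against the extra `prediction_length` values that the `validation` split appends to the training target. All numbers below are invented for the sketch:

```python
# Toy daily series with weekly seasonality.
season = 7
prediction_length = 3
history = [10, 12, 11, 13, 15, 20, 18,   # week 1
           11, 13, 12, 14, 16, 21, 19]   # week 2

# Seasonal-naive forecast: copy the value from one season earlier.
forecast = [history[len(history) - season + h] for h in range(prediction_length)]

# The validation target extends the training target by prediction_length values.
validation_target = history + [12, 14, 13]
actuals = validation_target[-prediction_length:]

# Score the baseline with mean absolute error.
mae = sum(abs(f - a) for f, a in zip(forecast, actuals)) / prediction_length
print(forecast)  # [11, 13, 12]
print(mae)       # 1.0
```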
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': '0'
}
```
### Data Fields
For the univariate regular time series each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
### Data Splits
The datasets are split in time depending on the prediction length specified in the datasets. In particular for each time series in a dataset there is a prediction length window of the future in the validation split and another prediction length more in the test split.
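The windowing can be sketched in plain Python (toy numbers; `H` stands for a dataset's prediction length):

```python
# For a fully observed series of length T and prediction length H, the
# validation target extends the train target by H values, and the test
# target extends the validation target by H more.
H = 4
full_series = list(range(20))  # the complete observed series (T = 20)

train_target      = full_series[: len(full_series) - 2 * H]
validation_target = full_series[: len(full_series) - H]
test_target       = full_series[:]

# The shorter splits are prefixes of the longer ones.
assert validation_target[: len(train_target)] == train_target
assert len(validation_target) - len(train_target) == H
assert len(test_target) - len(validation_target) == H
```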
## Dataset Creation
### Curation Rationale
The repository was curated to facilitate the evaluation of global forecasting models. All datasets in the repository are intended for research purposes and for evaluating the performance of new forecasting algorithms.
### Source Data
#### Initial Data Collection and Normalization
Out of the 30 datasets, 23 were already publicly available in different platforms with different data formats. The original sources of all datasets are mentioned in the datasets table above.
After extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series of different frequencies, and the archive contains a separate dataset for each frequency.
#### Who are the source language producers?
The data comes from the datasets listed in the table above.
### Annotations
#### Annotation process
The annotations come from the datasets listed in the table above.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Rakshitha Godahewa](mailto:rakshitha.godahewa@monash.edu)
* [Christoph Bergmeir](mailto:christoph.bergmeir@monash.edu)
* [Geoff Webb](mailto:geoff.webb@monash.edu)
* [Rob Hyndman](mailto:rob.hyndman@monash.edu)
* [Pablo Montero-Manso](mailto:pablo.monteromanso@sydney.edu.au)
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@InProceedings{godahewa2021monash,
author = "Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
title = "Monash Time Series Forecasting Archive",
booktitle = "Neural Information Processing Systems Track on Datasets and Benchmarks",
year = "2021",
note = "forthcoming"
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. |
app_reviews | 2022-11-03T16:47:21.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | It is a large dataset of Android applications belonging to 23 differentapps categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications of the F-Droid repository, including around 600 versions, 280,000 user reviews (extracted with specific text mining approaches) | @InProceedings{Zurich Open Repository and
Archive:dataset,
title = {Software Applications User Reviews},
authors={Grano, Giovanni; Di Sorbo, Andrea; Mercaldo, Francesco; Visaggio, Corrado A; Canfora, Gerardo;
Panichella, Sebastiano},
year={2017}
} | null | 13 | 2,288 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- sentiment-scoring
paperswithcode_id: null
pretty_name: AppReviews
dataset_info:
features:
- name: package_name
dtype: string
- name: review
dtype: string
- name: date
dtype: string
- name: star
dtype: int8
splits:
- name: train
num_bytes: 32769079
num_examples: 288065
download_size: 42592679
dataset_size: 32769079
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Home Page](https://github.com/sealuzh/user_quality)
- **Repository:** [Repo Link](https://github.com/sealuzh/user_quality)
- **Paper:** [Link](https://giograno.me/assets/pdf/workshop/wama17.pdf)
- **Leaderboard:**
- **Point of Contact:** [Darshan Gandhi](mailto:darshangandhi1151@gmail.com)
### Dataset Summary
This is a large dataset of Android applications belonging to 23 different app categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications from the F-Droid repository, including around 600 versions and 280,000 user reviews (extracted with specific text mining approaches).
### Supported Tasks and Leaderboards
The dataset comprises 395 different apps from the F-Droid repository, including code quality indicators of 629 versions of these
apps. It also encloses app reviews related to each of these versions, which have been automatically categorized by classifying the types of user feedback from a software maintenance and evolution perspective.
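Since each review carries a 1-5 star rating, one simple way to derive a target for the sentiment-scoring task is to rescale the stars to the [0, 1] range. This is a hypothetical preprocessing choice, not something shipped with the dataset:

```python
def star_to_sentiment(star: int) -> float:
    """Rescale a 1-5 star rating to a sentiment score in [0, 1]."""
    return (star - 1) / 4

print(star_to_sentiment(1))  # 0.0
print(star_to_sentiment(4))  # 0.75
print(star_to_sentiment(5))  # 1.0
```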
### Languages
The dataset is monolingual; all review messages are in English.
## Dataset Structure
### Data Instances
Each data instance consists of a review message in English together with its metadata:
```python
{'package_name': 'com.mantz_it.rfanalyzer',
 'review': "Great app! The new version now works on my Bravia Android TV which is great as it's right by my rooftop aerial cable. The scan feature would be useful...any ETA on when this will be available? Also the option to import a list of bookmarks e.g. from a simple properties file would be useful.",
 'date': 'October 12 2016',
 'star': 4}
```
### Data Fields
* package_name : name of the software application package
* review : message of the user
* date : date when the user posted the review
* star : rating provided by the user for the application
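A short sketch of working with these fields, e.g. computing the average star rating per package over a few made-up records:

```python
from collections import defaultdict

# Toy records in the shape of the dataset's instances (invented values).
reviews = [
    {"package_name": "com.mantz_it.rfanalyzer", "star": 4},
    {"package_name": "com.mantz_it.rfanalyzer", "star": 5},
    {"package_name": "org.example.app", "star": 2},
]

totals = defaultdict(lambda: [0, 0])  # package -> [sum of stars, review count]
for r in reviews:
    totals[r["package_name"]][0] += r["star"]
    totals[r["package_name"]][1] += 1

averages = {pkg: s / n for pkg, (s, n) in totals.items()}
print(averages)  # {'com.mantz_it.rfanalyzer': 4.5, 'org.example.app': 2.0}
```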
### Data Splits
There is a single training split, with a total of 288,065 examples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
With the help of this dataset, one can better understand software applications and the views and opinions users hold about them. This helps to understand which types of software applications users prefer and how these applications help users solve their problems and issues.
### Discussion of Biases
The reviews cover only open-source applications from the F-Droid repository; applications from other sectors are not represented.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Giovanni Grano - (University of Zurich), Sebastiano Panichella - (University of Zurich), Andrea di Sorbo - (University of Sannio)
### Licensing Information
[More Information Needed]
### Citation Information
@InProceedings{ZurichOpenRepositoryAndArchive:dataset,
  title  = {Software Applications User Reviews},
  author = {Grano, Giovanni and Di Sorbo, Andrea and Mercaldo, Francesco and Visaggio, Corrado A. and Canfora, Gerardo and Panichella, Sebastiano},
  year   = {2017}
}
### Contributions
Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset. |
Villekom/oa_dolly_15k_fi | 2023-08-23T14:15:07.000Z | [
"region:us"
] | Villekom | null | null | null | 0 | 2,269 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
struct:
- name: CATEGORY
dtype: string
- name: CONTEXT
dtype: string
splits:
- name: train
num_bytes: 13654728
num_examples: 15015
download_size: 8698896
dataset_size: 13654728
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oa_dolly_15k_fi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lhoestq/test2 | 2021-07-23T14:21:45.000Z | [
"region:us"
] | lhoestq | null | null | null | 0 | 2,258 | This is a readme
|
clue | 2023-05-25T06:34:47.000Z | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_ids:topic-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:natural-language-inference",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:zh",
"license:unknown",
"coreference-nli",
"qa-nli",
"region:us"
] | null | CLUE, A Chinese Language Understanding Evaluation Benchmark
(https://www.cluebenchmarks.com/) is a collection of resources for training,
evaluating, and analyzing Chinese language understanding systems. | @misc{xu2020clue,
title={CLUE: A Chinese Language Understanding Evaluation Benchmark},
author={Liang Xu and Xuanwei Zhang and Lu Li and Hai Hu and Chenjie Cao and Weitang Liu and Junyi Li and Yudong Li and Kai Sun and Yechen Xu and Yiming Cui and Cong Yu and Qianqian Dong and Yin Tian and Dian Yu and Bo Shi and Jun Zeng and Rongzhao Wang and Weijian Xie and Yanting Li and Yina Patterson and Zuoyu Tian and Yiwen Zhang and He Zhou and Shaoweihua Liu and Qipeng Zhao and Cong Yue and Xinrui Zhang and Zhengliang Yang and Zhenzhong Lan},
year={2020},
eprint={2004.05986},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 26 | 2,256 | ---
annotations_creators:
- other
language_creators:
- other
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- multiple-choice
task_ids:
- topic-classification
- semantic-similarity-scoring
- natural-language-inference
- multiple-choice-qa
paperswithcode_id: clue
pretty_name: 'CLUE: Chinese Language Understanding Evaluation benchmark'
tags:
- coreference-nli
- qa-nli
dataset_info:
- config_name: afqmc
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 378726
num_examples: 3861
- name: train
num_bytes: 3396535
num_examples: 34334
- name: validation
num_bytes: 426293
num_examples: 4316
download_size: 1195044
dataset_size: 4201554
- config_name: tnews
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': '100'
'1': '101'
'2': '102'
'3': '103'
'4': '104'
'5': '106'
'6': '107'
'7': '108'
'8': '109'
'9': '110'
'10': '112'
'11': '113'
'12': '114'
'13': '115'
'14': '116'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 810974
num_examples: 10000
- name: train
num_bytes: 4245701
num_examples: 53360
- name: validation
num_bytes: 797926
num_examples: 10000
download_size: 5123575
dataset_size: 5854601
- config_name: iflytek
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
'10': '10'
'11': '11'
'12': '12'
'13': '13'
'14': '14'
'15': '15'
'16': '16'
'17': '17'
'18': '18'
'19': '19'
'20': '20'
'21': '21'
'22': '22'
'23': '23'
'24': '24'
'25': '25'
'26': '26'
'27': '27'
'28': '28'
'29': '29'
'30': '30'
'31': '31'
'32': '32'
'33': '33'
'34': '34'
'35': '35'
'36': '36'
'37': '37'
'38': '38'
'39': '39'
'40': '40'
'41': '41'
'42': '42'
'43': '43'
'44': '44'
'45': '45'
'46': '46'
'47': '47'
'48': '48'
'49': '49'
'50': '50'
'51': '51'
'52': '52'
'53': '53'
'54': '54'
'55': '55'
'56': '56'
'57': '57'
'58': '58'
'59': '59'
'60': '60'
'61': '61'
'62': '62'
'63': '63'
'64': '64'
'65': '65'
'66': '66'
'67': '67'
'68': '68'
'69': '69'
'70': '70'
'71': '71'
'72': '72'
'73': '73'
'74': '74'
'75': '75'
'76': '76'
'77': '77'
'78': '78'
'79': '79'
'80': '80'
'81': '81'
'82': '82'
'83': '83'
'84': '84'
'85': '85'
'86': '86'
'87': '87'
'88': '88'
'89': '89'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
'100': '100'
'101': '101'
'102': '102'
'103': '103'
'104': '104'
'105': '105'
'106': '106'
'107': '107'
'108': '108'
'109': '109'
'110': '110'
'111': '111'
'112': '112'
'113': '113'
'114': '114'
'115': '115'
'116': '116'
'117': '117'
'118': '118'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 2105688
num_examples: 2600
- name: train
num_bytes: 10028613
num_examples: 12133
- name: validation
num_bytes: 2157123
num_examples: 2599
download_size: 6505938
dataset_size: 14291424
- config_name: cmnli
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': entailment
'2': contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 2386837
num_examples: 13880
- name: train
num_bytes: 67685309
num_examples: 391783
- name: validation
num_bytes: 2051845
num_examples: 12241
download_size: 31404066
dataset_size: 72123991
- config_name: cluewsc2020
features:
- name: idx
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'true'
'1': 'false'
- name: target
struct:
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
splits:
- name: test
num_bytes: 645649
num_examples: 2574
- name: train
num_bytes: 288828
num_examples: 1244
- name: validation
num_bytes: 72682
num_examples: 304
download_size: 281384
dataset_size: 1007159
- config_name: csl
features:
- name: idx
dtype: int32
- name: corpus_id
dtype: int32
- name: abst
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: keyword
sequence: string
splits:
- name: test
num_bytes: 2463740
num_examples: 3000
- name: train
num_bytes: 16478914
num_examples: 20000
- name: validation
num_bytes: 2464575
num_examples: 3000
download_size: 3234594
dataset_size: 21407229
- config_name: cmrc2018
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 3112066
num_examples: 2000
- name: train
num_bytes: 15508110
num_examples: 10142
- name: validation
num_bytes: 5183809
num_examples: 3219
- name: trial
num_bytes: 1606931
num_examples: 1002
download_size: 3405146
dataset_size: 25410916
- config_name: drcd
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 4982402
num_examples: 3493
- name: train
num_bytes: 37443458
num_examples: 26936
- name: validation
num_bytes: 5222753
num_examples: 3524
download_size: 7264200
dataset_size: 47648613
- config_name: chid
features:
- name: idx
dtype: int32
- name: candidates
sequence: string
- name: content
sequence: string
- name: answers
sequence:
- name: text
dtype: string
- name: candidate_id
dtype: int32
splits:
- name: test
num_bytes: 11480463
num_examples: 3447
- name: train
num_bytes: 252478178
num_examples: 84709
- name: validation
num_bytes: 10117789
num_examples: 3218
download_size: 139199202
dataset_size: 274076430
- config_name: c3
features:
- name: id
dtype: int32
- name: context
sequence: string
- name: question
dtype: string
- name: choice
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1600166
num_examples: 1625
- name: train
num_bytes: 9672787
num_examples: 11869
- name: validation
num_bytes: 2990967
num_examples: 3816
download_size: 3495930
dataset_size: 14263920
- config_name: ocnli
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': entailment
'2': contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 376066
num_examples: 3000
- name: train
num_bytes: 6187190
num_examples: 50437
- name: validation
num_bytes: 366235
num_examples: 2950
download_size: 4359754
dataset_size: 6929491
- config_name: diagnostics
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': entailment
'2': contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 42400
num_examples: 514
download_size: 12062
dataset_size: 42400
---
# Dataset Card for "clue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cluebenchmarks.com
- **Repository:** https://github.com/CLUEbenchmark/CLUE
- **Paper:** [CLUE: A Chinese Language Understanding Evaluation Benchmark](https://aclanthology.org/2020.coling-main.419/)
- **Point of Contact:** [Zhenzhong Lan](mailto:lanzhenzhong@westlake.edu.cn)
- **Size of downloaded dataset files:** 198.68 MB
- **Size of the generated dataset:** 486.34 MB
- **Total amount of disk used:** 685.02 MB
### Dataset Summary
CLUE, A Chinese Language Understanding Evaluation Benchmark
(https://www.cluebenchmarks.com/) is a collection of resources for training,
evaluating, and analyzing Chinese language understanding systems.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### afqmc
- **Size of downloaded dataset files:** 1.20 MB
- **Size of the generated dataset:** 4.20 MB
- **Total amount of disk used:** 5.40 MB
An example of 'validation' looks as follows.
```
{
"idx": 0,
"label": 0,
"sentence1": "双十一花呗提额在哪",
"sentence2": "里可以提花呗额度"
}
```
#### c3
- **Size of downloaded dataset files:** 3.20 MB
- **Size of the generated dataset:** 15.69 MB
- **Total amount of disk used:** 18.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "比人的灵敏",
"choice": ["没有人的灵敏", "和人的差不多", "和人的一样好", "比人的灵敏"],
"context": "[\"许多动物的某些器官感觉特别灵敏,它们能比人类提前知道一些灾害事件的发生,例如,海洋中的水母能预报风暴,老鼠能事先躲避矿井崩塌或有害气体,等等。地震往往能使一些动物的某些感觉器官受到刺激而发生异常反应。如一个地区的重力发生变异,某些动物可能通过它们的平衡...",
"id": 1,
"question": "动物的器官感觉与人的相比有什么不同?"
}
```
#### chid
- **Size of downloaded dataset files:** 139.20 MB
- **Size of the generated dataset:** 274.08 MB
- **Total amount of disk used:** 413.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"candidate_id": [3, 5, 6, 1, 7, 4, 0],
"text": ["碌碌无为", "无所作为", "苦口婆心", "得过且过", "未雨绸缪", "软硬兼施", "传宗接代"]
},
"candidates": "[\"传宗接代\", \"得过且过\", \"咄咄逼人\", \"碌碌无为\", \"软硬兼施\", \"无所作为\", \"苦口婆心\", \"未雨绸缪\", \"和衷共济\", \"人老珠黄\"]...",
"content": "[\"谈到巴萨目前的成就,瓜迪奥拉用了“坚持”两个字来形容。自从上世纪90年代克鲁伊夫带队以来,巴萨就坚持每年都有拉玛西亚球员进入一队的传统。即便是范加尔时代,巴萨强力推出的“巴萨五鹰”德拉·佩纳、哈维、莫雷罗、罗杰·加西亚和贝拉乌桑几乎#idiom0000...",
"idx": 0
}
```
#### cluewsc2020
- **Size of downloaded dataset files:** 0.28 MB
- **Size of the generated dataset:** 1.03 MB
- **Total amount of disk used:** 1.29 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"label": 1,
"target": {
"span1_index": 3,
"span1_text": "伤口",
"span2_index": 27,
"span2_text": "它们"
},
"text": "裂开的伤口涂满尘土,里面有碎石子和木头刺,我小心翼翼把它们剔除出去。"
}
```
#### cmnli
- **Size of downloaded dataset files:** 31.40 MB
- **Size of the generated dataset:** 72.12 MB
- **Total amount of disk used:** 103.53 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"label": 0,
"sentence1": "从概念上讲,奶油略读有两个基本维度-产品和地理。",
"sentence2": "产品和地理位置是使奶油撇油起作用的原因。"
}
```
### Data Fields
The data fields are the same among all splits.
#### afqmc
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `0` (0), `1` (1).
- `idx`: a `int32` feature.
#### c3
- `id`: a `int32` feature.
- `context`: a `list` of `string` features.
- `question`: a `string` feature.
- `choice`: a `list` of `string` features.
- `answer`: a `string` feature.
#### chid
- `idx`: a `int32` feature.
- `candidates`: a `list` of `string` features.
- `content`: a `list` of `string` features.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `candidate_id`: a `int32` feature.
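The `candidate_id` values index into the `candidates` list. A minimal sketch using the (cropped) chid train example shown earlier on this card:

```python
# `candidates` and `answers.candidate_id` from the chid train example shown
# earlier on this card; each id indexes into the candidates list.
candidates = ["传宗接代", "得过且过", "咄咄逼人", "碌碌无为", "软硬兼施",
              "无所作为", "苦口婆心", "未雨绸缪", "和衷共济", "人老珠黄"]
answer_ids = [3, 5, 6, 1, 7, 4, 0]

# Resolve each answer id to its idiom string.
resolved = [candidates[i] for i in answer_ids]
print(resolved[0])  # 碌碌无为
```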
#### cluewsc2020
- `idx`: a `int32` feature.
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `true` (0), `false` (1).
- `span1_text`: a `string` feature.
- `span2_text`: a `string` feature.
- `span1_index`: a `int32` feature.
- `span2_index`: a `int32` feature.
#### cmnli
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `neutral` (0), `entailment` (1), `contradiction` (2).
- `idx`: a `int32` feature.
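The `label` feature stores integers; the mapping to the class names listed above can be sketched without the `datasets` library as:

```python
# Integer <-> string mapping for the cmnli label feature, as listed above.
CMNLI_LABELS = ["neutral", "entailment", "contradiction"]

def int2str(label_id: int) -> str:
    """Map a stored integer label to its class name."""
    return CMNLI_LABELS[label_id]

def str2int(name: str) -> int:
    """Map a class name back to its stored integer label."""
    return CMNLI_LABELS.index(name)

print(int2str(0), str2int("contradiction"))  # neutral 2
```

With the dataset loaded through the `datasets` library, the same mapping is available via the label feature's own `int2str`/`str2int` methods.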
### Data Splits
| name |train |validation|test |
|-----------|-----:|---------:|----:|
|afqmc | 34334| 4316| 3861|
|c3 | 11869| 3816| 3892|
|chid | 84709| 3218| 3231|
|cluewsc2020| 1244| 304| 290|
|cmnli |391783| 12241|13880|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{xu-etal-2020-clue,
title = "{CLUE}: A {C}hinese Language Understanding Evaluation Benchmark",
author = "Xu, Liang and
Hu, Hai and
Zhang, Xuanwei and
Li, Lu and
Cao, Chenjie and
Li, Yudong and
Xu, Yechen and
Sun, Kai and
Yu, Dian and
Yu, Cong and
Tian, Yin and
Dong, Qianqian and
Liu, Weitang and
Shi, Bo and
Cui, Yiming and
Li, Junyi and
Zeng, Jun and
Wang, Rongzhao and
Xie, Weijian and
Li, Yanting and
Patterson, Yina and
Tian, Zuoyu and
Zhang, Yiwen and
Zhou, He and
Liu, Shaoweihua and
Zhao, Zhe and
Zhao, Qipeng and
Yue, Cong and
Zhang, Xinrui and
Yang, Zhengliang and
Richardson, Kyle and
Lan, Zhenzhong",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.419",
doi = "10.18653/v1/2020.coling-main.419",
pages = "4762--4772",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
gsarti/flores_101 | 2022-10-27T08:37:36.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|flores",
"language:af",
"language:am",
"language:ar",
"language:hy",
"language:as",
"language:ast",
"language:az",
"language:be",
"language:bn",
"language:bs",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:zho",
"language:hr",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:ff",
"language:gl",
"language:lg",
"language:ka",
"language:de",
"language:el",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:ig",
"language:id",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:kea",
"language:kam",
"language:kn",
"language:kk",
"language:km",
"language:ko",
"language:ky",
"language:lo",
"language:lv",
"language:ln",
"language:lt",
"language:luo",
"language:lb",
"language:mk",
"language:ms",
"language:ml",
"language:mt",
"language:mi",
"language:mr",
"language:mn",
"language:ne",
"language:ns",
"language:no",
"language:ny",
"language:oc",
"language:or",
"language:om",
"language:ps",
"language:fa",
"language:pl",
"language:pt",
"language:pa",
"language:ro",
"language:ru",
"language:sr",
"language:sn",
"language:sd",
"language:sk",
"language:sl",
"language:so",
"language:ku",
"language:es",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:umb",
"language:ur",
"language:uz",
"language:vi",
"language:cy",
"language:wo",
"language:xh",
"language:yo",
"language:zu",
"license:cc-by-sa-4.0",
"conditional-text-generation",
"arxiv:2106.03193",
"region:us"
] | gsarti | One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the
lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource
languages, consider only restricted domains, or are low quality because they are constructed using
semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001
sentences extracted from English Wikipedia and covering a variety of different topics and domains.
These sentences have been translated in 101 languages by professional translators through a carefully
controlled process. The resulting dataset enables better assessment of model quality on the long tail of
low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all
translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset,
we hope to foster progress in the machine translation community and beyond. | @inproceedings{,
title={The {FLORES}-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={
Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and
Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela
},
year={2021}
} | null | 8 | 2,256 | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- af
- am
- ar
- hy
- as
- ast
- az
- be
- bn
- bs
- bg
- my
- ca
- ceb
- zho
- hr
- cs
- da
- nl
- en
- et
- tl
- fi
- fr
- ff
- gl
- lg
- ka
- de
- el
- gu
- ha
- he
- hi
- hu
- is
- ig
- id
- ga
- it
- ja
- jv
- kea
- kam
- kn
- kk
- km
- ko
- ky
- lo
- lv
- ln
- lt
- luo
- lb
- mk
- ms
- ml
- mt
- mi
- mr
- mn
- ne
- ns
- 'no'
- ny
- oc
- or
- om
- ps
- fa
- pl
- pt
- pa
- ro
- ru
- sr
- sn
- sd
- sk
- sl
- so
- ku
- es
- sw
- sv
- tg
- ta
- te
- th
- tr
- uk
- umb
- ur
- uz
- vi
- cy
- wo
- xh
- yo
- zu
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
- translation
size_categories:
- unknown
source_datasets:
- extended|flores
task_categories:
- text-generation
- translation
task_ids: []
paperswithcode_id: flores
pretty_name: flores101
tags:
- conditional-text-generation
---
# Dataset Card for Flores 101
## Table of Contents
- [Dataset Card for Flores 101](#dataset-card-for-flores-101)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Home:** [WMT](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html)
- **Repository:** [Github](https://github.com/facebookresearch/flores)
- **Blogpost:** [FAIR](https://ai.facebook.com/blog/the-flores-101-data-set-helping-build-better-translation-systems-around-the-world)
- **Paper:** [Arxiv](https://arxiv.org/abs/2106.03193)
- **Point of Contact:** [flores@fb.com](mailto:flores@fb.com)
- **Leaderboard** [Dynabench](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL))
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
Abstract from the original paper:
> One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as in the original dataset, without further preprocessing or tokenization.
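Because rows share the same `id` across language configurations, parallel sentence pairs can be recovered by joining on that field. A minimal sketch with hypothetical, truncated rows (real rows carry the full schema with `URL`, `domain`, `topic`, and the other fields):

```python
# Hypothetical aligned rows from two language configs, joined on `id`.
eng_rows = [{"id": 1, "sentence": "On Monday, scientists announced ..."},
            {"id": 2, "sentence": "Lead researchers say ..."}]
rus_rows = [{"id": 1, "sentence": "В понедельник ученые объявили ..."},
            {"id": 2, "sentence": "Ведущие исследователи говорят ..."}]

# Index the Russian rows by id, then pair each English sentence with it.
rus_by_id = {row["id"]: row["sentence"] for row in rus_rows}
pairs = [(row["sentence"], rus_by_id[row["id"]]) for row in eng_rows]
print(len(pairs))  # 2
```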
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
### Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
``` |
pcuenq/oxford-pets | 2022-08-06T16:01:34.000Z | [
"task_categories:image-classification",
"source_datasets:https://www.robots.ox.ac.uk/~vgg/data/pets/",
"license:cc-by-sa-4.0",
"pets",
"oxford",
"region:us"
] | pcuenq | null | null | null | 5 | 2,245 | ---
tags:
- pets
- oxford
license: cc-by-sa-4.0
license_details: https://www.robots.ox.ac.uk/~vgg/data/pets/
pretty_name: Oxford-IIIT Pet Dataset (no annotations)
source_datasets: https://www.robots.ox.ac.uk/~vgg/data/pets/
task_categories:
- image-classification
---
# Oxford-IIIT Pet Dataset
Images from [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/). Only images and labels have been pushed; segmentation annotations were ignored.
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/pets/
License:
Same as the original dataset.
|
intfloat/multilingual_cc_news | 2023-04-23T08:19:06.000Z | [
"size_categories:100M<n<1B",
"language:en",
"language:zh",
"language:fr",
"language:de",
"language:af",
"language:ar",
"region:us"
] | intfloat | \
Multilingual CC-News dataset.
This is the processed version from https://huggingface.co/datasets/CloverSearch/cc-news-mutlilingual. | null | null | 3 | 2,237 | ---
size_categories:
- 100M<n<1B
language:
- en
- zh
- fr
- de
- af
- ar
---
### Dataset Summary
This dataset is based on [CloverSearch/cc-news-mutlilingual](https://huggingface.co/datasets/CloverSearch/cc-news-mutlilingual).
We add a loading script so that the multilingual CC-News dataset can be accessed through the Hugging Face `datasets` API instead of downloading the raw data files directly.
### Data Fields
- `title`: a `string` feature.
- `maintext`: a `string` feature.
- `url`: a `string` feature.
- `date_publish`: a `string` feature.
### How to use this dataset
You can load any subset of CC-News per language:
```python
from datasets import load_dataset
dataset = load_dataset("intfloat/multilingual_cc_news", languages=["af"])
```
## Supported Languages
```
af
als
am
an
ar
arz
as
ast
av
az
azb
ba
bar
bcl
be
bg
bh
bn
bo
bpy
br
bs
bxr
ca
cbk
ce
ceb
ckb
co
cs
cv
cy
da
de
diq
dsb
dty
dv
el
eml
en
eo
es
et
eu
fa
fi
fr
fy
ga
gd
gl
gn
gom
gu
gv
he
hi
hif
hr
hsb
ht
hu
hy
ia
id
ie
ilo
io
is
it
ja
jbo
jv
ka
kk
km
kn
ko
krc
ku
kv
kw
ky
la
lb
lez
li
lmo
lo
lt
lv
mai
mg
mhr
min
mk
ml
mn
mr
mrj
ms
mt
mwl
my
myv
mzn
nah
nap
nds
ne
new
nl
nn
no
oc
or
os
pa
pam
pfl
pl
pms
pnb
ps
pt
qu
rm
ro
ru
sa
sah
sc
scn
sco
sd
sh
si
sk
sl
so
sq
sr
su
sv
sw
ta
te
tg
th
tk
tl
tr
tt
tyv
ug
uk
ur
uz
vec
vep
vi
vls
vo
wa
war
wuu
xal
xmf
yi
yo
yue
zh
```
|
bcui19/chat-v2-anthropic-helpfulness | 2023-06-26T23:22:50.000Z | [
"license:apache-2.0",
"region:us"
] | bcui19 | null | null | null | 0 | 2,204 | ---
license: apache-2.0
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 162490682.0
num_examples: 155270
- name: test
num_bytes: 8773391.0
num_examples: 8336
download_size: 82339171
dataset_size: 171264073.0
---
|
ivanzhouyq/RedPajama-Tiny | 2023-07-03T18:16:47.000Z | [
"task_categories:text-generation",
"language:en",
"region:us"
] | ivanzhouyq | RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset. This is a 1B-token sample of the full dataset. | null | null | 2 | 2,169 | ---
task_categories:
- text-generation
language:
- en
pretty_name: RedPajama Tiny
---
# Dataset Card for Dataset Name
### Dataset Summary
This is a tiny version of the RedPajama dataset, which is a clean-room, fully open-source implementation of the LLaMa dataset.
This dataset contains 64 samples from each of the 7 sources.
The full dataset has the following token counts and is available for [download](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T):
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
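Records in this shape can be handled with plain dictionaries; the sketch below (using made-up records, not actual dataset content) tallies examples per source via the `meta` field:

```python
from collections import Counter

# Made-up records mirroring the {"text": ..., "meta": {...}} layout shown above.
records = [
    {"text": "def add(a, b): return a + b", "meta": {"source": "github", "language": "en"}},
    {"text": "An encyclopedia paragraph.", "meta": {"source": "wikipedia", "language": "en"}},
    {"text": "A crawled news paragraph.", "meta": {"source": "commoncrawl", "language": "en"}},
    {"text": "Another crawled paragraph.", "meta": {"source": "commoncrawl", "language": "en"}},
]

counts = Counter(r["meta"]["source"] for r in records)
print(counts.most_common(1))  # [('commoncrawl', 2)]
```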
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
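The paragraph-level deduplication step can be sketched as an exact-match filter over normalized paragraph hashes. This is a simplification: the real pipeline runs `cc_net` and a learned quality classifier, neither of which is shown here.

```python
import hashlib

def dedup_paragraphs(documents):
    """Keep only the first occurrence of each normalized paragraph across all documents."""
    seen = set()
    result = []
    for doc in documents:
        kept = []
        for para in doc.split("\n"):
            key = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if key not in seen:
                seen.add(key)
                kept.append(para)
        result.append("\n".join(kept))
    return result

docs = ["Hello world.\nSome unique text.", "Hello world.\nOther text."]
print(dedup_paragraphs(docs))  # ['Hello world.\nSome unique text.', 'Other text.']
```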
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format in which hyperlinks, comments and other
formatting boilerplate have been removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
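A minimal simhash fingerprint can be computed as below (64 bits over whitespace-separated lowercase tokens; the actual tokenization and distance threshold used in the pipeline may differ). Near-duplicate documents map to fingerprints with small Hamming distance.

```python
import hashlib

def simhash(text, bits=64):
    """64-bit simhash over whitespace-separated lowercase tokens."""
    v = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Identical texts always collide; near-duplicates land close together.
print(hamming(simhash("it was the best of times"), simhash("it was the best of times")))  # 0
```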
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
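Comment removal, one of the steps above, can be sketched with a regular expression that drops unescaped `%` comments while keeping `\%` literals; preamble, macro, and bibliography stripping are omitted from this sketch.

```python
import re

def strip_latex_comments(source):
    """Drop unescaped % comments from LaTeX source (a simplified sketch)."""
    return re.sub(r"(?<!\\)%.*", "", source)

print(strip_latex_comments(r"50\% of runs % TODO: recheck"))
```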
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). We only keep the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
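Grouping answers under their question and ordering them by score can be sketched as follows (the record shapes are made up for illustration):

```python
def to_qa_pair(question, answers):
    """Pair a question with its answers, highest-scored answer first (sketch)."""
    ranked = sorted(answers, key=lambda a: a["score"], reverse=True)
    return {"question": question, "answers": [a["text"] for a in ranked]}

answers = [
    {"text": "Use a context manager.", "score": 12},
    {"text": "Call close() manually.", "score": 3},
]
pair = to_qa_pair("How do I close a file in Python?", answers)
print(pair["answers"][0])  # Use a context manager.
```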
|
scientific_papers | 2023-04-05T13:39:46.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"abstractive-summarization",
"arxiv:1804.05685",
"region:us"
] | null | Scientific papers datasets contain two sets of long and structured documents.
The datasets are obtained from ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have three features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n". | @article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
} | null | 77 | 2,153 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: ScientificPapers
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: null
tags:
- abstractive-summarization
dataset_info:
- config_name: arxiv
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: section_names
dtype: string
splits:
- name: train
num_bytes: 7148341992
num_examples: 203037
- name: validation
num_bytes: 217125524
num_examples: 6436
- name: test
num_bytes: 217514961
num_examples: 6440
download_size: 4504646347
dataset_size: 7582982477
- config_name: pubmed
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: section_names
dtype: string
splits:
- name: train
num_bytes: 2252027383
num_examples: 119924
- name: validation
num_bytes: 127403398
num_examples: 6633
- name: test
num_bytes: 127184448
num_examples: 6658
download_size: 4504646347
dataset_size: 2506615229
---
# Dataset Card for "scientific_papers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/armancohan/long-summarization
- **Paper:** [A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents](https://arxiv.org/abs/1804.05685)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 9.01 GB
- **Size of the generated dataset:** 10.09 GB
- **Total amount of disk used:** 19.10 GB
### Dataset Summary
Scientific papers datasets contain two sets of long and structured documents.
The datasets are obtained from ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have three features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n".
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### arxiv
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 7.58 GB
- **Total amount of disk used:** 12.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
#### pubmed
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 2.51 GB
- **Total amount of disk used:** 7.01 GB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
"article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
"section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
#### pubmed
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
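Since `section_names` is a newline-delimited string, splitting it recovers the list of section titles. The record below is made up to mirror the fields listed above:

```python
# A made-up record mirroring the three string fields above.
record = {
    "article": "intro paragraph\nmethods paragraph",
    "abstract": "a short abstract",
    "section_names": "Introduction\nMethods\nResults",
}

sections = record["section_names"].split("\n")
print(sections)  # ['Introduction', 'Methods', 'Results']
```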
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|arxiv |203037| 6436|6440|
|pubmed|119924| 6633|6658|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
gem | 2023-06-01T14:59:56.000Z | [
"task_categories:fill-mask",
"task_categories:summarization",
"task_categories:table-to-text",
"task_categories:tabular-to-text",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:dialogue-modeling",
"task_ids:rdf-to-text",
"task_ids:news-articles-summarization",
"task_ids:text-simplification",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:crowdsourced",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:extended|other-vision-datasets",
"source_datasets:original",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:ru",
"language:tr",
"language:vi",
"license:other",
"intent-to-text",
"meaning-representation-to-text",
"concepts-to-text",
"arxiv:2102.01672",
"region:us"
] | null | GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation,
both through human annotations and automated Metrics.
GEM aims to:
- measure NLG progress across 13 datasets spanning many NLG tasks and languages.
- provide an in-depth analysis of data and models presented via data statements and challenge sets.
- develop standards for evaluation of generated text using both automated and human metrics.
It is our goal to regularly update GEM and to encourage more inclusive practices in dataset development
by extending existing data or developing datasets for additional languages. | @article{gem_benchmark,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Joao Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
} | null | 21 | 2,130 | ---
annotations_creators:
- crowdsourced
- found
language_creators:
- crowdsourced
- found
- machine-generated
language:
- cs
- de
- en
- es
- ru
- tr
- vi
license:
- other
multilinguality:
- monolingual
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- extended|other-vision-datasets
- original
task_categories:
- fill-mask
- summarization
- table-to-text
- tabular-to-text
- text-generation
- text2text-generation
task_ids:
- dialogue-modeling
- rdf-to-text
- news-articles-summarization
- text-simplification
paperswithcode_id: gem
pretty_name: GEM
tags:
- intent-to-text
- meaning-representation-to-text
- concepts-to-text
dataset_info:
- config_name: mlsum_de
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: text
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 858060337
num_examples: 220748
- name: validation
num_bytes: 49712791
num_examples: 11392
- name: test
num_bytes: 49146354
num_examples: 10695
- name: challenge_train_sample
num_bytes: 1894220
num_examples: 500
- name: challenge_validation_sample
num_bytes: 2202723
num_examples: 500
- name: challenge_test_covid
num_bytes: 19771285
num_examples: 5058
download_size: 362783528
dataset_size: 980787710
- config_name: mlsum_es
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: text
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 1211240956
num_examples: 259888
- name: validation
num_bytes: 51611723
num_examples: 9977
- name: test
num_bytes: 72117564
num_examples: 13366
- name: challenge_train_sample
num_bytes: 2366443
num_examples: 500
- name: challenge_validation_sample
num_bytes: 2658596
num_examples: 500
- name: challenge_test_covid
num_bytes: 13576624
num_examples: 1938
download_size: 525621426
dataset_size: 1353571906
- config_name: wiki_lingua_es_en_v0
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 215665468
num_examples: 79515
- name: validation
num_bytes: 25891008
num_examples: 8835
- name: test
num_bytes: 50195305
num_examples: 19797
download_size: 169406387
dataset_size: 291751781
- config_name: wiki_lingua_ru_en_v0
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 159631205
num_examples: 36898
- name: validation
num_bytes: 18626973
num_examples: 4100
- name: test
num_bytes: 34865311
num_examples: 9094
download_size: 169406387
dataset_size: 213123489
- config_name: wiki_lingua_tr_en_v0
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 7689845
num_examples: 3193
- name: validation
num_bytes: 942122
num_examples: 355
- name: test
num_bytes: 1875110
num_examples: 808
download_size: 169406387
dataset_size: 10507077
- config_name: wiki_lingua_vi_en_v0
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 31599580
num_examples: 9206
- name: validation
num_bytes: 3618660
num_examples: 1023
- name: test
num_bytes: 6267359
num_examples: 2167
download_size: 169406387
dataset_size: 41485599
- config_name: wiki_lingua_arabic_ar
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- ar
- en
- name: target_aligned
dtype:
translation:
languages:
- ar
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 208106335
num_examples: 20441
- name: validation
num_bytes: 31126187
num_examples: 2919
- name: test
num_bytes: 60915220
num_examples: 5841
download_size: 58984103
dataset_size: 300147742
- config_name: wiki_lingua_chinese_zh
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- zh
- en
- name: target_aligned
dtype:
translation:
languages:
- zh
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 86130302
num_examples: 13211
- name: validation
num_bytes: 13060918
num_examples: 1886
- name: test
num_bytes: 25310021
num_examples: 3775
download_size: 32899156
dataset_size: 124501241
- config_name: wiki_lingua_czech_cs
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- cs
- en
- name: target_aligned
dtype:
translation:
languages:
- cs
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 41107318
num_examples: 5033
- name: validation
num_bytes: 6305328
num_examples: 718
- name: test
num_bytes: 12124770
num_examples: 1438
download_size: 14515534
dataset_size: 59537416
- config_name: wiki_lingua_dutch_nl
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- nl
- en
- name: target_aligned
dtype:
translation:
languages:
- nl
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 169067454
num_examples: 21866
- name: validation
num_bytes: 25521003
num_examples: 3123
- name: test
num_bytes: 49165151
num_examples: 6248
download_size: 56492150
dataset_size: 243753608
- config_name: wiki_lingua_english_en
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- en
- en
- name: target_aligned
dtype:
translation:
languages:
- en
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 464171624
num_examples: 99020
- name: validation
num_bytes: 67652281
num_examples: 13823
- name: test
num_bytes: 138944243
num_examples: 28614
download_size: 118031903
dataset_size: 670768148
- config_name: wiki_lingua_french_fr
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- fr
- en
- name: target_aligned
dtype:
translation:
languages:
- fr
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 372039357
num_examples: 44556
- name: validation
num_bytes: 54992250
num_examples: 6364
- name: test
num_bytes: 108831855
num_examples: 12731
download_size: 118758047
dataset_size: 535863462
- config_name: wiki_lingua_german_de
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- de
- en
- name: target_aligned
dtype:
translation:
languages:
- de
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 322276536
num_examples: 40839
- name: validation
num_bytes: 47631883
num_examples: 5833
- name: test
num_bytes: 93715331
num_examples: 11669
download_size: 107638803
dataset_size: 463623750
- config_name: wiki_lingua_hindi_hi
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- hi
- en
- name: target_aligned
dtype:
translation:
languages:
- hi
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 99672133
num_examples: 6942
- name: validation
num_bytes: 14706378
num_examples: 991
- name: test
num_bytes: 28543048
num_examples: 1984
download_size: 21042040
dataset_size: 142921559
- config_name: wiki_lingua_indonesian_id
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- id
- en
- name: target_aligned
dtype:
translation:
languages:
- id
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 263974954
num_examples: 33237
- name: validation
num_bytes: 39297987
num_examples: 4747
- name: test
num_bytes: 76567819
num_examples: 9497
download_size: 83968162
dataset_size: 379840760
- config_name: wiki_lingua_italian_it
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- it
- en
- name: target_aligned
dtype:
translation:
languages:
- it
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 267090482
num_examples: 35661
- name: validation
num_bytes: 39227425
num_examples: 5093
- name: test
num_bytes: 76840429
num_examples: 10189
download_size: 88921209
dataset_size: 383158336
- config_name: wiki_lingua_japanese_ja
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- ja
- en
- name: target_aligned
dtype:
translation:
languages:
- ja
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 73871019
num_examples: 8853
- name: validation
num_bytes: 10807006
num_examples: 1264
- name: test
num_bytes: 21175951
num_examples: 2530
download_size: 22803299
dataset_size: 105853976
- config_name: wiki_lingua_korean_ko
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- ko
- en
- name: target_aligned
dtype:
translation:
languages:
- ko
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 73106687
num_examples: 8524
- name: validation
num_bytes: 10788276
num_examples: 1216
- name: test
num_bytes: 21172641
num_examples: 2436
download_size: 23336917
dataset_size: 105067604
- config_name: wiki_lingua_portuguese_pt
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- pt
- en
- name: target_aligned
dtype:
translation:
languages:
- pt
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 405546332
num_examples: 57159
- name: validation
num_bytes: 59729210
num_examples: 8165
- name: test
num_bytes: 117775356
num_examples: 16331
download_size: 137542940
dataset_size: 583050898
- config_name: wiki_lingua_russian_ru
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- ru
- en
- name: target_aligned
dtype:
translation:
languages:
- ru
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 406299624
num_examples: 37028
- name: validation
num_bytes: 59651340
num_examples: 5288
- name: test
num_bytes: 116330937
num_examples: 10580
download_size: 106281321
dataset_size: 582281901
- config_name: wiki_lingua_spanish_es
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- es
- en
- name: target_aligned
dtype:
translation:
languages:
- es
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 604276564
num_examples: 79212
- name: validation
num_bytes: 88677656
num_examples: 11316
- name: test
num_bytes: 177096288
num_examples: 22632
download_size: 198247534
dataset_size: 870050508
- config_name: wiki_lingua_thai_th
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- th
- en
- name: target_aligned
dtype:
translation:
languages:
- th
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 139287649
num_examples: 10325
- name: validation
num_bytes: 21097845
num_examples: 1475
- name: test
num_bytes: 40049968
num_examples: 2950
download_size: 29988180
dataset_size: 200435462
- config_name: wiki_lingua_turkish_tr
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- tr
- en
- name: target_aligned
dtype:
translation:
languages:
- tr
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 21987247
num_examples: 3148
- name: validation
num_bytes: 3229714
num_examples: 449
- name: test
num_bytes: 6197850
num_examples: 900
download_size: 7055820
dataset_size: 31414811
- config_name: wiki_lingua_vietnamese_vi
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source_aligned
dtype:
translation:
languages:
- vi
- en
- name: target_aligned
dtype:
translation:
languages:
- vi
- en
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 128025008
num_examples: 13707
- name: validation
num_bytes: 19414734
num_examples: 1957
- name: test
num_bytes: 37430208
num_examples: 3917
download_size: 38035490
dataset_size: 184869950
- config_name: xsum
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: xsum_id
dtype: string
- name: document
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 66299136
num_examples: 23206
- name: validation
num_bytes: 2270306
num_examples: 1117
- name: test
num_bytes: 2598509
num_examples: 1166
- name: challenge_train_sample
num_bytes: 1429145
num_examples: 500
- name: challenge_validation_sample
num_bytes: 1012689
num_examples: 500
- name: challenge_test_backtranslation
num_bytes: 1262047
num_examples: 500
- name: challenge_test_bfp_02
num_bytes: 1090364
num_examples: 500
- name: challenge_test_bfp_05
num_bytes: 1078076
num_examples: 500
- name: challenge_test_nopunc
num_bytes: 1127796
num_examples: 500
- name: challenge_test_covid
num_bytes: 1867180
num_examples: 401
download_size: 258277147
dataset_size: 80035248
- config_name: common_gen
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: concept_set_id
dtype: int32
- name: concepts
list: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 10475926
num_examples: 67389
- name: validation
num_bytes: 405872
num_examples: 993
- name: test
num_bytes: 153170
num_examples: 1497
- name: challenge_train_sample
num_bytes: 85413
num_examples: 500
- name: challenge_validation_sample
num_bytes: 215192
num_examples: 500
- name: challenge_test_scramble
num_bytes: 60411
num_examples: 500
download_size: 1933517
dataset_size: 11395984
- config_name: cs_restaurants
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: dialog_act
dtype: string
- name: dialog_act_delexicalized
dtype: string
- name: target_delexicalized
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 873145
num_examples: 3569
- name: validation
num_bytes: 288222
num_examples: 781
- name: test
num_bytes: 295696
num_examples: 842
- name: challenge_train_sample
num_bytes: 127869
num_examples: 500
- name: challenge_validation_sample
num_bytes: 193239
num_examples: 500
- name: challenge_test_scramble
num_bytes: 185574
num_examples: 500
download_size: 1531111
dataset_size: 1963745
- config_name: dart
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: dart_id
dtype: int32
- name: tripleset
list:
list: string
- name: subtree_was_extended
dtype: bool
- name: target_sources
list: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 23047610
num_examples: 62659
- name: validation
num_bytes: 1934054
num_examples: 2768
- name: test
num_bytes: 3476953
num_examples: 5097
download_size: 29939366
dataset_size: 28458617
- config_name: e2e_nlg
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: meaning_representation
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 9129030
num_examples: 33525
- name: validation
num_bytes: 1856097
num_examples: 4299
- name: test
num_bytes: 2133695
num_examples: 4693
- name: challenge_train_sample
num_bytes: 145319
num_examples: 500
- name: challenge_validation_sample
num_bytes: 226525
num_examples: 500
- name: challenge_test_scramble
num_bytes: 236199
num_examples: 500
download_size: 14668048
dataset_size: 13726865
- config_name: totto
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: totto_id
dtype: int32
- name: table_page_title
dtype: string
- name: table_webpage_url
dtype: string
- name: table_section_title
dtype: string
- name: table_section_text
dtype: string
- name: table
list:
list:
- name: column_span
dtype: int32
- name: is_header
dtype: bool
- name: row_span
dtype: int32
- name: value
dtype: string
- name: highlighted_cells
list:
list: int32
- name: example_id
dtype: string
- name: sentence_annotations
list:
- name: original_sentence
dtype: string
- name: sentence_after_deletion
dtype: string
- name: sentence_after_ambiguity
dtype: string
- name: final_sentence
dtype: string
- name: overlap_subset
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 676032144
num_examples: 121153
- name: validation
num_bytes: 50736204
num_examples: 7700
- name: test
num_bytes: 41330062
num_examples: 7700
- name: challenge_train_sample
num_bytes: 2283076
num_examples: 500
- name: challenge_validation_sample
num_bytes: 3398639
num_examples: 500
- name: challenge_test_scramble
num_bytes: 2638966
num_examples: 500
download_size: 189534609
dataset_size: 776419091
- config_name: web_nlg_en
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: input
list: string
- name: target
dtype: string
- name: references
list: string
- name: category
dtype: string
- name: webnlg_id
dtype: string
splits:
- name: train
num_bytes: 13067615
num_examples: 35426
- name: validation
num_bytes: 1153995
num_examples: 1667
- name: test
num_bytes: 1403601
num_examples: 1779
- name: challenge_train_sample
num_bytes: 193198
num_examples: 502
- name: challenge_validation_sample
num_bytes: 359868
num_examples: 499
- name: challenge_test_scramble
num_bytes: 402407
num_examples: 500
- name: challenge_test_numbers
num_bytes: 409213
num_examples: 500
download_size: 13181969
dataset_size: 16989897
- config_name: web_nlg_ru
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: input
list: string
- name: target
dtype: string
- name: references
list: string
- name: category
dtype: string
- name: webnlg_id
dtype: string
splits:
- name: train
num_bytes: 6888009
num_examples: 14630
- name: validation
num_bytes: 795998
num_examples: 790
- name: test
num_bytes: 1145282
num_examples: 1102
- name: challenge_train_sample
num_bytes: 247089
num_examples: 501
- name: challenge_validation_sample
num_bytes: 514117
num_examples: 500
- name: challenge_test_scramble
num_bytes: 521625
num_examples: 500
download_size: 7854845
dataset_size: 10112120
- config_name: wiki_auto_asset_turk
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 161095379
num_examples: 483801
- name: validation
num_bytes: 8211308
num_examples: 20000
- name: test_asset
num_bytes: 475336
num_examples: 359
- name: test_turk
num_bytes: 406842
num_examples: 359
- name: challenge_train_sample
num_bytes: 219542
num_examples: 500
- name: challenge_validation_sample
num_bytes: 213048
num_examples: 500
- name: challenge_test_asset_backtranslation
num_bytes: 436820
num_examples: 359
- name: challenge_test_asset_bfp02
num_bytes: 432742
num_examples: 359
- name: challenge_test_asset_bfp05
num_bytes: 432742
num_examples: 359
- name: challenge_test_asset_nopunc
num_bytes: 432735
num_examples: 359
- name: challenge_test_turk_backtranslation
num_bytes: 417204
num_examples: 359
- name: challenge_test_turk_bfp02
num_bytes: 414381
num_examples: 359
- name: challenge_test_turk_bfp05
num_bytes: 414383
num_examples: 359
- name: challenge_test_turk_nopunc
num_bytes: 414388
num_examples: 359
download_size: 126927527
dataset_size: 174016850
- config_name: schema_guided_dialog
features:
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: dialog_acts
list:
- name: act
dtype:
class_label:
names:
'0': AFFIRM
'1': AFFIRM_INTENT
'2': CONFIRM
'3': GOODBYE
'4': INFORM
'5': INFORM_COUNT
'6': INFORM_INTENT
'7': NEGATE
'8': NEGATE_INTENT
'9': NOTIFY_FAILURE
'10': NOTIFY_SUCCESS
'11': OFFER
'12': OFFER_INTENT
'13': REQUEST
'14': REQUEST_ALTS
'15': REQ_MORE
'16': SELECT
'17': THANK_YOU
- name: slot
dtype: string
- name: values
list: string
- name: context
list: string
- name: dialog_id
dtype: string
- name: service
dtype: string
- name: turn_id
dtype: int32
- name: prompt
dtype: string
- name: target
dtype: string
- name: references
list: string
splits:
- name: train
num_bytes: 146648117
num_examples: 164982
- name: validation
num_bytes: 9376504
num_examples: 10000
- name: test
num_bytes: 10160596
num_examples: 10000
- name: challenge_train_sample
num_bytes: 441326
num_examples: 500
- name: challenge_validation_sample
num_bytes: 491492
num_examples: 500
- name: challenge_test_backtranslation
num_bytes: 512834
num_examples: 500
- name: challenge_test_bfp02
num_bytes: 529404
num_examples: 500
- name: challenge_test_bfp05
num_bytes: 515151
num_examples: 500
- name: challenge_test_nopunc
num_bytes: 509332
num_examples: 500
- name: challenge_test_scramble
num_bytes: 514644
num_examples: 500
download_size: 17826468
dataset_size: 169699400
config_names:
- common_gen
- cs_restaurants
- dart
- e2e_nlg
- mlsum_de
- mlsum_es
- schema_guided_dialog
- totto
- web_nlg_en
- web_nlg_ru
- wiki_auto_asset_turk
- wiki_lingua_es_en
- wiki_lingua_ru_en
- wiki_lingua_tr_en
- wiki_lingua_vi_en
- xsum
---
# Dataset Card for GEM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gem-benchmark.github.io/](https://gem-benchmark.github.io/)
- **Repository:**
- **Paper:** [The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics](https://arxiv.org/abs/2102.01672)
- **Point of Contact:** [Sebastian Gehrmann](gehrmann@google.com)
- **Size of downloaded dataset files:** 2.19 GB
- **Size of the generated dataset:** 3.92 GB
- **Total amount of disk used:** 6.10 GB
### Dataset Summary
GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation,
both through human annotations and automated Metrics.
GEM aims to:
- measure NLG progress across 13 datasets spanning many NLG tasks and languages.
- provide an in-depth analysis of data and models presented via data statements and challenge sets.
- develop standards for evaluation of generated text using both automated and human metrics.
It is our goal to regularly update GEM and to encourage more inclusive practices in dataset development
by extending existing data or developing datasets for additional languages.
You can find more complete information in the dataset cards for each of the subsets:
- [CommonGen](https://gem-benchmark.com/data_cards/common_gen)
- [Czech Restaurant](https://gem-benchmark.com/data_cards/cs_restaurants)
- [DART](https://gem-benchmark.com/data_cards/dart)
- [E2E](https://gem-benchmark.com/data_cards/e2e_nlg)
- [MLSum](https://gem-benchmark.com/data_cards/mlsum)
- [Schema-Guided Dialog](https://gem-benchmark.com/data_cards/schema_guided_dialog)
- [WebNLG](https://gem-benchmark.com/data_cards/web_nlg)
- [Wiki-Auto/ASSET/TURK](https://gem-benchmark.com/data_cards/wiki_auto_asset_turk)
- [WikiLingua](https://gem-benchmark.com/data_cards/wiki_lingua)
- [XSum](https://gem-benchmark.com/data_cards/xsum)
The subsets are organized by task:
```
{
"summarization": {
"mlsum": ["mlsum_de", "mlsum_es"],
"wiki_lingua": ["wiki_lingua_es_en", "wiki_lingua_ru_en", "wiki_lingua_tr_en", "wiki_lingua_vi_en"],
"xsum": ["xsum"],
},
"struct2text": {
"common_gen": ["common_gen"],
"cs_restaurants": ["cs_restaurants"],
"dart": ["dart"],
"e2e": ["e2e_nlg"],
"totto": ["totto"],
"web_nlg": ["web_nlg_en", "web_nlg_ru"],
},
"simplification": {
"wiki_auto_asset_turk": ["wiki_auto_asset_turk"],
},
"dialog": {
"schema_guided_dialog": ["schema_guided_dialog"],
},
}
```
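This nested mapping can be flattened to recover the full list of config names declared in the card header. The snippet below reproduces the mapping from above and enumerates it (a standalone sketch; it does not download any data):

```python
# The task-to-config mapping reproduced from the card above.
GEM_TASKS = {
    "summarization": {
        "mlsum": ["mlsum_de", "mlsum_es"],
        "wiki_lingua": ["wiki_lingua_es_en", "wiki_lingua_ru_en",
                        "wiki_lingua_tr_en", "wiki_lingua_vi_en"],
        "xsum": ["xsum"],
    },
    "struct2text": {
        "common_gen": ["common_gen"],
        "cs_restaurants": ["cs_restaurants"],
        "dart": ["dart"],
        "e2e": ["e2e_nlg"],
        "totto": ["totto"],
        "web_nlg": ["web_nlg_en", "web_nlg_ru"],
    },
    "simplification": {"wiki_auto_asset_turk": ["wiki_auto_asset_turk"]},
    "dialog": {"schema_guided_dialog": ["schema_guided_dialog"]},
}

def all_configs(tasks):
    """Flatten the nested task mapping into a sorted list of config names."""
    return sorted(
        config
        for datasets in tasks.values()
        for configs in datasets.values()
        for config in configs
    )

print(len(all_configs(GEM_TASKS)))  # 16 configs, matching the header list
```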
Each training example has a single `target`, while each validation and test example additionally provides a set of `references` (one or more items).
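In practice, a system output is scored against all `references`, not just the single `target`. The sketch below illustrates the convention with a hardcoded record mirroring the `common_gen` validation example quoted later in this card (in real use you would load the data, e.g. via `datasets.load_dataset("gem", "common_gen")`; the simple word-overlap scorer here is illustrative only, not one of the official GEM metrics):

```python
# Hardcoded validation record mirroring the `common_gen` example in this card.
example = {
    "gem_id": "common_gen-validation-0",
    "target": "The player stood in the field looking at the batter.",
    "references": [
        "The player stood in the field looking at the batter.",
        "The coach stands along the field, looking at the goalkeeper.",
        "I stood and looked across the field, peacefully.",
        "Someone stands, looking around the empty field.",
    ],
}

def best_reference_overlap(prediction, references):
    """Score a prediction by its best word-overlap F1 against any reference."""
    pred_tokens = set(prediction.lower().split())
    best = 0.0
    for ref in references:
        ref_tokens = set(ref.lower().split())
        if not pred_tokens or not ref_tokens:
            continue
        overlap = len(pred_tokens & ref_tokens)
        f1 = 2 * overlap / (len(pred_tokens) + len(ref_tokens))
        best = max(best, f1)
    return best

# A prediction identical to one of the references scores 1.0.
score = best_reference_overlap(example["target"], example["references"])
print(round(score, 2))
```

Multi-reference scoring matters because many generation tasks admit several equally valid outputs; taking the best match against any reference avoids penalizing legitimate variation.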
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### common_gen
- **Size of downloaded dataset files:** 1.85 MB
- **Size of the generated dataset:** 9.23 MB
- **Total amount of disk used:** 11.07 MB
An example of `validation` looks as follows.
```
{'concept_set_id': 0,
'concepts': ['field', 'look', 'stand'],
'gem_id': 'common_gen-validation-0',
'references': ['The player stood in the field looking at the batter.',
'The coach stands along the field, looking at the goalkeeper.',
'I stood and looked across the field, peacefully.',
'Someone stands, looking around the empty field.'],
'target': 'The player stood in the field looking at the batter.'}
```
#### cs_restaurants
- **Size of downloaded dataset files:** 1.47 MB
- **Size of the generated dataset:** 1.31 MB
- **Total amount of disk used:** 2.77 MB
An example of `validation` looks as follows.
```
{'dialog_act': '?request(area)',
'dialog_act_delexicalized': '?request(area)',
'gem_id': 'cs_restaurants-validation-0',
'references': ['Jakou lokalitu hledáte ?'],
'target': 'Jakou lokalitu hledáte ?',
'target_delexicalized': 'Jakou lokalitu hledáte ?'}
```
#### dart
- **Size of downloaded dataset files:** 29.37 MB
- **Size of the generated dataset:** 27.44 MB
- **Total amount of disk used:** 56.81 MB
An example of `validation` looks as follows.
```
{'dart_id': 0,
'gem_id': 'dart-validation-0',
'references': ['A school from Mars Hill, North Carolina, joined in 1973.'],
'subtree_was_extended': True,
'target': 'A school from Mars Hill, North Carolina, joined in 1973.',
'target_sources': ['WikiSQL_decl_sents'],
'tripleset': [['Mars Hill College', 'JOINED', '1973'], ['Mars Hill College', 'LOCATION', 'Mars Hill, North Carolina']]}
```
#### e2e_nlg
- **Size of downloaded dataset files:** 14.60 MB
- **Size of the generated dataset:** 12.14 MB
- **Total amount of disk used:** 26.74 MB
An example of `validation` looks as follows.
```
{'gem_id': 'e2e_nlg-validation-0',
'meaning_representation': 'name[Alimentum], area[city centre], familyFriendly[no]',
'references': ['There is a place in the city centre, Alimentum, that is not family-friendly.'],
'target': 'There is a place in the city centre, Alimentum, that is not family-friendly.'}
```
#### mlsum_de
- **Size of downloaded dataset files:** 347.36 MB
- **Size of the generated dataset:** 951.06 MB
- **Total amount of disk used:** 1.30 GB
An example of `validation` looks as follows.
```
{'date': '00/04/2019',
'gem_id': 'mlsum_de-validation-0',
'references': ['In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ihrer Wohnung gefunden worden. Nun stehen zwei Bekannte unter Verdacht.'],
'target': 'In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ihrer Wohnung gefunden worden. Nun stehen zwei Bekannte unter Verdacht.',
'text': 'Kerzen und Blumen stehen vor dem Eingang eines Hauses, in dem eine 18-jährige Frau tot aufgefunden wurde. In einer Kleinstadt auf der Insel Usedom war eine junge Frau tot in ...',
'title': 'Tod von 18-Jähriger auf Usedom: Zwei Festnahmen',
'topic': 'panorama',
'url': 'https://www.sueddeutsche.de/panorama/usedom-frau-tot-festnahme-verdaechtige-1.4412256'}
```
#### mlsum_es
- **Size of downloaded dataset files:** 514.11 MB
- **Size of the generated dataset:** 1.31 GB
- **Total amount of disk used:** 1.83 GB
An example of `validation` looks as follows.
```
{'date': '05/01/2019',
'gem_id': 'mlsum_es-validation-0',
'references': ['El diseñador que dio carta de naturaleza al estilo genuinamente americano celebra el medio siglo de su marca entre grandes fastos y problemas financieros. Conectar con las nuevas generaciones es el regalo que precisa más que nunca'],
'target': 'El diseñador que dio carta de naturaleza al estilo genuinamente americano celebra el medio siglo de su marca entre grandes fastos y problemas financieros. Conectar con las nuevas generaciones es el regalo que precisa más que nunca',
'text': 'Un oso de peluche marcándose un heelflip de monopatín es todo lo que Ralph Lauren necesitaba esta Navidad. Estampado en un jersey de lana azul marino, supone la guinda que corona ...',
'title': 'Ralph Lauren busca el secreto de la eterna juventud',
'topic': 'elpais estilo',
'url': 'http://elpais.com/elpais/2019/01/04/estilo/1546617396_933318.html'}
```
#### schema_guided_dialog
- **Size of downloaded dataset files:** 8.64 MB
- **Size of the generated dataset:** 45.78 MB
- **Total amount of disk used:** 54.43 MB
An example of `validation` looks as follows.
```
{'dialog_acts': [{'act': 2, 'slot': 'song_name', 'values': ['Carnivore']}, {'act': 2, 'slot': 'playback_device', 'values': ['TV']}],
'dialog_id': '10_00054',
'gem_id': 'schema_guided_dialog-validation-0',
'prompt': 'Yes, I would.',
'references': ['Please confirm the song Carnivore on tv.'],
'target': 'Please confirm the song Carnivore on tv.',
'turn_id': 15}
```
#### totto
- **Size of downloaded dataset files:** 187.73 MB
- **Size of the generated dataset:** 757.99 MB
- **Total amount of disk used:** 945.72 MB
An example of `validation` looks as follows.
```
{'example_id': '7391450717765563190',
'gem_id': 'totto-validation-0',
'highlighted_cells': [[3, 0], [3, 2], [3, 3]],
'overlap_subset': 'True',
'references': ['Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.',
'Daniel Henry Chamberlain was the 76th Governor of South Carolina, beginning in 1874.',
'Daniel Henry Chamberlain was the 76th Governor of South Carolina who took office in 1874.'],
'sentence_annotations': [{'final_sentence': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.',
'original_sentence': 'Daniel Henry Chamberlain (June 23, 1835 – April 13, 1907) was an American planter, lawyer, author and the 76th Governor of South Carolina '
'from 1874 until 1877.',
'sentence_after_ambiguity': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.',
'sentence_after_deletion': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.'},
...
],
'table': [[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '#'},
{'column_span': 2, 'is_header': True, 'row_span': 1, 'value': 'Governor'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Took Office'},
{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': 'Left Office'}],
[{'column_span': 1, 'is_header': True, 'row_span': 1, 'value': '74'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': '-'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'Robert Kingston Scott'},
{'column_span': 1, 'is_header': False, 'row_span': 1, 'value': 'July 6, 1868'}],
...
],
'table_page_title': 'List of Governors of South Carolina',
'table_section_text': 'Parties Democratic Republican',
'table_section_title': 'Governors under the Constitution of 1868',
'table_webpage_url': 'http://en.wikipedia.org/wiki/List_of_Governors_of_South_Carolina',
'target': 'Daniel Henry Chamberlain was the 76th Governor of South Carolina from 1874.',
'totto_id': 0}
```
#### web_nlg_en
- **Size of downloaded dataset files:** 12.95 MB
- **Size of the generated dataset:** 14.63 MB
- **Total amount of disk used:** 27.57 MB
An example of `validation` looks as follows.
```
{'category': 'Airport',
'gem_id': 'web_nlg_en-validation-0',
'input': ['Aarhus | leader | Jacob_Bundsgaard'],
'references': ['The leader of Aarhus is Jacob Bundsgaard.'],
'target': 'The leader of Aarhus is Jacob Bundsgaard.',
'webnlg_id': 'dev/Airport/1/Id1'}
```
#### web_nlg_ru
- **Size of downloaded dataset files:** 7.63 MB
- **Size of the generated dataset:** 8.41 MB
- **Total amount of disk used:** 16.04 MB
An example of `validation` looks as follows.
```
{'category': 'Airport',
'gem_id': 'web_nlg_ru-validation-0',
'input': ['Punjab,_Pakistan | leaderTitle | Provincial_Assembly_of_the_Punjab'],
'references': ['Пенджаб, Пакистан, возглавляется Провинциальной ассамблеей Пенджаба.', 'Пенджаб, Пакистан возглавляется Провинциальной ассамблеей Пенджаба.'],
'target': 'Пенджаб, Пакистан, возглавляется Провинциальной ассамблеей Пенджаба.',
'webnlg_id': 'dev/Airport/1/Id1'}
```
#### wiki_auto_asset_turk
- **Size of downloaded dataset files:** 127.27 MB
- **Size of the generated dataset:** 152.77 MB
- **Total amount of disk used:** 280.04 MB
An example of `validation` looks as follows.
```
{'gem_id': 'wiki_auto_asset_turk-validation-0',
'references': ['The Gandalf Awards honor excellent writing in in fantasy literature.'],
'source': 'The Gandalf Awards, honoring achievement in fantasy literature, were conferred by the World Science Fiction Society annually from 1974 to 1981.',
'source_id': '350_691837-1-0-0',
'target': 'The Gandalf Awards honor excellent writing in in fantasy literature.',
'target_id': '350_691837-0-0-0'}
```
#### wiki_lingua_es_en
- **Size of downloaded dataset files:** 169.41 MB
- **Size of the generated dataset:** 287.60 MB
- **Total amount of disk used:** 457.01 MB
An example of `validation` looks as follows.
```
'references': ["Practice matted hair prevention from early in your cat's life. Make sure that your cat is grooming itself effectively. Keep a close eye on cats with long hair."],
'source': 'Muchas personas presentan problemas porque no cepillaron el pelaje de sus gatos en una etapa temprana de su vida, ya que no lo consideraban necesario. Sin embargo, a medida que...',
'target': "Practice matted hair prevention from early in your cat's life. Make sure that your cat is grooming itself effectively. Keep a close eye on cats with long hair."}
```
#### wiki_lingua_ru_en
- **Size of downloaded dataset files:** 169.41 MB
- **Size of the generated dataset:** 211.21 MB
- **Total amount of disk used:** 380.62 MB
An example of `validation` looks as follows.
```
{'gem_id': 'wiki_lingua_ru_en-val-0',
'references': ['Get immediate medical care if you notice signs of a complication. Undergo diagnostic tests to check for gallstones and complications. Ask your doctor about your treatment '
'options.'],
'source': 'И хотя, скорее всего, вам не о чем волноваться, следует незамедлительно обратиться к врачу, если вы подозреваете, что у вас возникло осложнение желчекаменной болезни. Это ...',
'target': 'Get immediate medical care if you notice signs of a complication. Undergo diagnostic tests to check for gallstones and complications. Ask your doctor about your treatment '
'options.'}
```
#### wiki_lingua_tr_en
- **Size of downloaded dataset files:** 169.41 MB
- **Size of the generated dataset:** 10.35 MB
- **Total amount of disk used:** 179.75 MB
An example of `validation` looks as follows.
```
{'gem_id': 'wiki_lingua_tr_en-val-0',
'references': ['Open Instagram. Go to the video you want to download. Tap ⋮. Tap Copy Link. Open Google Chrome. Tap the address bar. Go to the SaveFromWeb site. Tap the "Paste Instagram Video" text box. Tap and hold the text box. Tap PASTE. Tap Download. Download the video. Find the video on your Android.'],
'source': 'Instagram uygulamasının çok renkli kamera şeklindeki simgesine dokun. Daha önce giriş yaptıysan Instagram haber kaynağı açılır. Giriş yapmadıysan istendiğinde e-posta adresini ...',
'target': 'Open Instagram. Go to the video you want to download. Tap ⋮. Tap Copy Link. Open Google Chrome. Tap the address bar. Go to the SaveFromWeb site. Tap the "Paste Instagram Video" text box. Tap and hold the text box. Tap PASTE. Tap Download. Download the video. Find the video on your Android.'}
```
#### wiki_lingua_vi_en
- **Size of downloaded dataset files:** 169.41 MB
- **Size of the generated dataset:** 41.02 MB
- **Total amount of disk used:** 210.43 MB
An example of `validation` looks as follows.
```
{'gem_id': 'wiki_lingua_vi_en-val-0',
'references': ['Select the right time of year for planting the tree. You will usually want to plant your tree when it is dormant, or not flowering, during cooler or colder times of year.'],
'source': 'Bạn muốn cung cấp cho cây cơ hội tốt nhất để phát triển và sinh tồn. Trồng cây đúng thời điểm trong năm chính là yếu tố then chốt. Thời điểm sẽ thay đổi phụ thuộc vào loài cây ...',
'target': 'Select the right time of year for planting the tree. You will usually want to plant your tree when it is dormant, or not flowering, during cooler or colder times of year.'}
```
#### xsum
- **Size of downloaded dataset files:** 254.89 MB
- **Size of the generated dataset:** 70.67 MB
- **Total amount of disk used:** 325.56 MB
An example of `validation` looks as follows.
```
{'document': 'Burberry reported pre-tax profits of £166m for the year to March. A year ago it made a loss of £16.1m, hit by charges at its Spanish operations.\n'
'In the past year it has opened 21 new stores and closed nine. It plans to open 20-30 stores this year worldwide.\n'
'The group has also focused on promoting the Burberry brand online...',
'gem_id': 'xsum-validation-0',
'references': ['Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing'],
'target': 'Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing',
'xsum_id': '10162122'}
```
### Data Fields
The data fields are the same among all splits.
#### common_gen
- `gem_id`: a `string` feature.
- `concept_set_id`: a `int32` feature.
- `concepts`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### cs_restaurants
- `gem_id`: a `string` feature.
- `dialog_act`: a `string` feature.
- `dialog_act_delexicalized`: a `string` feature.
- `target_delexicalized`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### dart
- `gem_id`: a `string` feature.
- `dart_id`: a `int32` feature.
- `tripleset`: a `list` of `string` features.
- `subtree_was_extended`: a `bool` feature.
- `target_sources`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### e2e_nlg
- `gem_id`: a `string` feature.
- `meaning_representation`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### mlsum_de
- `gem_id`: a `string` feature.
- `text`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### mlsum_es
- `gem_id`: a `string` feature.
- `text`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### schema_guided_dialog
- `gem_id`: a `string` feature.
- `act`: a classification label, with possible values including `AFFIRM` (0), `AFFIRM_INTENT` (1), `CONFIRM` (2), `GOODBYE` (3), `INFORM` (4).
- `slot`: a `string` feature.
- `values`: a `list` of `string` features.
- `dialog_id`: a `string` feature.
- `turn_id`: a `int32` feature.
- `prompt`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### totto
- `gem_id`: a `string` feature.
- `totto_id`: a `int32` feature.
- `table_page_title`: a `string` feature.
- `table_webpage_url`: a `string` feature.
- `table_section_title`: a `string` feature.
- `table_section_text`: a `string` feature.
- `column_span`: a `int32` feature.
- `is_header`: a `bool` feature.
- `row_span`: a `int32` feature.
- `value`: a `string` feature.
- `highlighted_cells`: a `list` of `int32` features.
- `example_id`: a `string` feature.
- `original_sentence`: a `string` feature.
- `sentence_after_deletion`: a `string` feature.
- `sentence_after_ambiguity`: a `string` feature.
- `final_sentence`: a `string` feature.
- `overlap_subset`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### web_nlg_en
- `gem_id`: a `string` feature.
- `input`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
- `category`: a `string` feature.
- `webnlg_id`: a `string` feature.
#### web_nlg_ru
- `gem_id`: a `string` feature.
- `input`: a `list` of `string` features.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
- `category`: a `string` feature.
- `webnlg_id`: a `string` feature.
#### wiki_auto_asset_turk
- `gem_id`: a `string` feature.
- `source_id`: a `string` feature.
- `target_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### wiki_lingua_es_en
- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### wiki_lingua_ru_en
- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### wiki_lingua_tr_en
- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### wiki_lingua_vi_en
- `gem_id`: a `string` feature.
- `source`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
#### xsum
- `gem_id`: a `string` feature.
- `xsum_id`: a `string` feature.
- `document`: a `string` feature.
- `target`: a `string` feature.
- `references`: a `list` of `string` features.
### Data Splits
#### common_gen
| |train|validation|test|
|----------|----:|---------:|---:|
|common_gen|67389| 993|1497|
#### cs_restaurants
| |train|validation|test|
|--------------|----:|---------:|---:|
|cs_restaurants| 3569| 781| 842|
#### dart
| |train|validation|test|
|----|----:|---------:|---:|
|dart|62659| 2768|6959|
#### e2e_nlg
| |train|validation|test|
|-------|----:|---------:|---:|
|e2e_nlg|33525| 4299|4693|
#### mlsum_de
| |train |validation|test |
|--------|-----:|---------:|----:|
|mlsum_de|220748| 11392|10695|
#### mlsum_es
| |train |validation|test |
|--------|-----:|---------:|----:|
|mlsum_es|259886| 9977|13365|
#### schema_guided_dialog
| |train |validation|test |
|--------------------|-----:|---------:|----:|
|schema_guided_dialog|164982| 10000|10000|
#### totto
| |train |validation|test|
|-----|-----:|---------:|---:|
|totto|121153| 7700|7700|
#### web_nlg_en
| |train|validation|test|
|----------|----:|---------:|---:|
|web_nlg_en|35426| 1667|1779|
#### web_nlg_ru
| |train|validation|test|
|----------|----:|---------:|---:|
|web_nlg_ru|14630| 790|1102|
#### wiki_auto_asset_turk
| |train |validation|test_asset|test_turk|
|--------------------|-----:|---------:|---------:|--------:|
|wiki_auto_asset_turk|373801| 73249| 359| 359|
#### wiki_lingua_es_en
| |train|validation|test |
|-----------------|----:|---------:|----:|
|wiki_lingua_es_en|79515| 8835|19797|
#### wiki_lingua_ru_en
| |train|validation|test|
|-----------------|----:|---------:|---:|
|wiki_lingua_ru_en|36898| 4100|9094|
#### wiki_lingua_tr_en
| |train|validation|test|
|-----------------|----:|---------:|---:|
|wiki_lingua_tr_en| 3193| 355| 808|
#### wiki_lingua_vi_en
| |train|validation|test|
|-----------------|----:|---------:|---:|
|wiki_lingua_vi_en| 9206| 1023|2167|
#### xsum
| |train|validation|test|
|----|----:|---------:|---:|
|xsum|23206| 1117|1166|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
CC-BY-SA-4.0
### Citation Information
```
@article{gem_benchmark,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
alzoubi36/policy_ie_a | 2023-06-24T07:20:44.000Z | [
"region:us"
] | alzoubi36 | null | null | null | 0 | 2,122 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 592707
num_examples: 4109
- name: validation
num_bytes: 16114
num_examples: 100
- name: test
num_bytes: 163819
num_examples: 1041
download_size: 364376
dataset_size: 772640
---
# Dataset for the PolicyIE-A task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
|
FastJobs/Visual_Emotional_Analysis | 2023-03-13T06:31:17.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | FastJobs | null | null | null | 5 | 2,110 | ---
task_categories:
- image-classification
language:
- en
size_categories:
- 10K<n<100K
--- |
wmt19 | 2023-04-05T13:44:03.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:extended|europarl_bilingual",
"source_datasets:extended|news_commentary",
"source_datasets:extended|opus_paracrawl",
"source_datasets:extended|un_multi",
"language:cs",
"language:de",
"language:en",
"language:fi",
"language:fr",
"language:gu",
"language:kk",
"language:lt",
"language:ru",
"language:zh",
"license:unknown",
"region:us"
] | null | null | @ONLINE {wmt19translate,
author = {Wikimedia Foundation},
title = {ACL 2019 Fourth Conference on Machine Translation (WMT19), Shared Task: Machine Translation of News},
url = {http://www.statmt.org/wmt19/translation-task.html}
} | null | 14 | 2,108 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- cs
- de
- en
- fi
- fr
- gu
- kk
- lt
- ru
- zh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10M<n<100M
source_datasets:
- extended|europarl_bilingual
- extended|news_commentary
- extended|opus_paracrawl
- extended|un_multi
task_categories:
- translation
task_ids: []
pretty_name: WMT19
paperswithcode_id: null
dataset_info:
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 1314871994
num_examples: 7270695
- name: validation
num_bytes: 696229
num_examples: 2983
download_size: 2018537046
dataset_size: 1315568223
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 8420967590
num_examples: 38690334
- name: validation
num_bytes: 757649
num_examples: 2998
download_size: 10422475109
dataset_size: 8421725239
- config_name: fi-en
features:
- name: translation
dtype:
translation:
languages:
- fi
- en
splits:
- name: train
num_bytes: 1422922267
num_examples: 6587448
- name: validation
num_bytes: 691841
num_examples: 3000
download_size: 1006124909
dataset_size: 1423614108
- config_name: gu-en
features:
- name: translation
dtype:
translation:
languages:
- gu
- en
splits:
- name: train
num_bytes: 590763
num_examples: 11670
- name: validation
num_bytes: 774621
num_examples: 1998
download_size: 38891457
dataset_size: 1365384
- config_name: kk-en
features:
- name: translation
dtype:
translation:
languages:
- kk
- en
splits:
- name: train
num_bytes: 9157438
num_examples: 126583
- name: validation
num_bytes: 846857
num_examples: 2066
download_size: 41558315
dataset_size: 10004295
- config_name: lt-en
features:
- name: translation
dtype:
translation:
languages:
- lt
- en
splits:
- name: train
num_bytes: 513084361
num_examples: 2344893
- name: validation
num_bytes: 541953
num_examples: 2000
download_size: 411309952
dataset_size: 513626314
- config_name: ru-en
features:
- name: translation
dtype:
translation:
languages:
- ru
- en
splits:
- name: train
num_bytes: 13721377178
num_examples: 37492126
- name: validation
num_bytes: 1085596
num_examples: 3000
download_size: 4134147853
dataset_size: 13722462774
- config_name: zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 5584359748
num_examples: 25984574
- name: validation
num_bytes: 1107522
num_examples: 3981
download_size: 2195879129
dataset_size: 5585467270
- config_name: fr-de
features:
- name: translation
dtype:
translation:
languages:
- fr
- de
splits:
- name: train
num_bytes: 2358413485
num_examples: 9824476
- name: validation
num_bytes: 441426
num_examples: 1512
download_size: 757345846
dataset_size: 2358854911
---
# Dataset Card for "wmt19"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.statmt.org/wmt19/translation-task.html](http://www.statmt.org/wmt19/translation-task.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.02 GB
- **Size of the generated dataset:** 1.32 GB
- **Total amount of disk used:** 3.33 GB
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Warning:</b> There are issues with the Common Crawl corpus data (<a href="https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz">training-parallel-commoncrawl.tgz</a>):</p>
<ul>
<li>Non-English files contain many English sentences.</li>
<li>Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.</li>
</ul>
<p>We have contacted the WMT organizers.</p>
</div>
Translation dataset based on the data from statmt.org.
Versions exist for different years using a combination of data
sources. The base `wmt` allows you to create a custom dataset by choosing
your own data/language pair. This can be done as follows:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
inspect_dataset("wmt19", "path/to/scripts")
builder = load_dataset_builder(
"path/to/scripts/wmt_utils.py",
language_pair=("fr", "de"),
subsets={
datasets.Split.TRAIN: ["commoncrawl_frde"],
datasets.Split.VALIDATION: ["euelections_dev2019"],
},
)
# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()
# Streamable version
ds = builder.as_streaming_dataset()
```
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### cs-en
- **Size of downloaded dataset files:** 2.02 GB
- **Size of the generated dataset:** 1.32 GB
- **Total amount of disk used:** 3.33 GB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### cs-en
- `translation`: a multilingual `string` variable, with possible languages including `cs`, `en`.
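Concretely, each record wraps one sentence pair in a single `translation` dict keyed by language code. A minimal sketch of that shape (the sentence values below are illustrative placeholders, not actual dataset content):

```python
# Each wmt19 record stores one sentence pair as a dict keyed by language code.
# The values here are illustrative placeholders, not actual dataset content.
example = {"translation": {"cs": "Ahoj svete", "en": "Hello world"}}

# Extracting the source/target pair for the cs-en config:
src = example["translation"]["cs"]
tgt = example["translation"]["en"]
```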
### Data Splits
|name | train |validation|
|-----|------:|---------:|
|cs-en|7270695| 2983|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@ONLINE {wmt19translate,
author = "Wikimedia Foundation",
title = "ACL 2019 Fourth Conference on Machine Translation (WMT19), Shared Task: Machine Translation of News",
url = "http://www.statmt.org/wmt19/translation-task.html"
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
wiki_hop | 2022-11-03T16:47:35.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"multi-hop",
"arxiv:1710.06481",
"region:us"
] | null | WikiHop is open-domain and based on Wikipedia articles; the goal is to recover Wikidata information by hopping through documents, answering text understanding queries by combining multiple facts that are spread across different documents. | @misc{welbl2018constructing,
title={Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
author={Johannes Welbl and Pontus Stenetorp and Sebastian Riedel},
year={2018},
eprint={1710.06481},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 2,107 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: wikihop
pretty_name: WikiHop
tags:
- multi-hop
dataset_info:
- config_name: original
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: candidates
sequence: string
- name: supports
sequence: string
- name: annotations
sequence:
sequence: string
splits:
- name: train
num_bytes: 325952974
num_examples: 43738
- name: validation
num_bytes: 41246536
num_examples: 5129
download_size: 339843061
dataset_size: 367199510
- config_name: masked
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: candidates
sequence: string
- name: supports
sequence: string
- name: annotations
sequence:
sequence: string
splits:
- name: train
num_bytes: 348249138
num_examples: 43738
- name: validation
num_bytes: 44066862
num_examples: 5129
download_size: 339843061
dataset_size: 392316000
---
# Dataset Card for WikiHop
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [QAngaroo](http://qangaroo.cs.ucl.ac.uk/)
- **Repository:** [More Information Needed]
- **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481)
- **Leaderboard:** [leaderboard](http://qangaroo.cs.ucl.ac.uk/leaderboard.html)
- **Point of Contact:** [Johannes Welbl](j.welbl@cs.ucl.ac.uk)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
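Pending a full field description, the `dataset_info` block above (`id`, `query`, `answer`, `candidates`, `supports`, `annotations`) implies records of the following shape; all field values here are invented placeholders:

```python
# Shape of a WikiHop record per the card's dataset_info; the values are
# invented placeholders, not actual dataset content.
example = {
    "id": "WH_train_0",                          # unique example identifier
    "query": "country_of_citizenship subject",   # relation + subject entity
    "answer": "united kingdom",                  # gold answer string
    "candidates": ["united kingdom", "france"],  # answer options
    "supports": ["Support passage 1 ...", "Support passage 2 ..."],
    "annotations": [],                           # annotator labels (validation only)
}

# The gold answer is drawn from the candidate list.
is_valid = example["answer"] in example["candidates"]
```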
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
bigcode/the-stack-smol | 2023-05-02T10:14:19.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] | bigcode | null | null | null | 22 | 2,100 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
extra_gated_prompt: |-
## Terms of Use for The Stack
The Stack dataset is a collection of 3.1 TB of source code in 30 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
## Dataset Description

A small subset (~0.1%) of [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, each programming language has 10,000 random samples from the original dataset. The dataset has 2.6GB of text (code).
## Languages
The dataset contains 30 programming languages:
```
"assembly", "batchfile", "c++", "c", "c-sharp", "cmake", "css", "dockerfile", "fortran", "go", "haskell", "html", "java",
"javascript", "julia", "lua", "makefile", "markdown", "perl", "php", "powershell", "python", "ruby", "rust",
"scala", "shell", "sql", "tex", "typescript", "visual-basic"
```
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("bigcode/the-stack-smol")
DatasetDict({
train: Dataset({
features: ['content', 'avg_line_length', 'max_line_length', 'alphanum_fraction', 'licenses', 'repository_name', 'path', 'size', 'lang'],
num_rows: 300000
})
})
```
### How to use it
You can either load the whole dataset like above, or load a specific language such as python by specifying the folder directory:
```python
load_dataset("bigcode/the-stack-smol", data_dir="data/python")
DatasetDict({
train: Dataset({
features: ['content', 'avg_line_length', 'max_line_length', 'alphanum_fraction', 'licenses', 'repository_name', 'path', 'size', 'lang'],
num_rows: 10000
})
})
```
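Since every row carries a `lang` column, you can also subset a loaded split after the fact. A minimal pure-Python sketch of that filtering logic (the two rows below are invented placeholders standing in for real dataset rows):

```python
# Each row of the-stack-smol carries a `lang` column, so a loaded split can
# be subset per language. The two rows here are invented placeholders.
rows = [
    {"content": "print('hi')", "lang": "python"},
    {"content": "SELECT 1;", "lang": "sql"},
]

# On a real `datasets.Dataset` you would use
# ds.filter(lambda ex: ex["lang"] == "python"); the predicate is the same:
python_rows = [r for r in rows if r["lang"] == "python"]
```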
|
EleutherAI/hendrycks_math | 2023-09-14T20:29:14.000Z | [
"region:us"
] | EleutherAI | MATH is a dataset of 12,500 challenging competition mathematics problems. Each
problem in MATH has a full step-by-step solution, which can be used to teach
models to generate answer derivations and explanations. | @article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
} | null | 0 | 2,100 | Entry not found |
senti_lex | 2023-06-08T12:24:00.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:af",
"language:an",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mk",
"language:mr",
"language:ms",
"language:mt",
"language:nl",
"language:nn",
"language:no",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vo",
"language:wa",
"language:yi",
"language:zh",
"language:zhw",
"license:gpl-3.0",
"region:us"
] | null | This dataset add sentiment lexicons for 81 languages generated via graph propagation based on a knowledge graph--a graphical representation of real-world entities and the links between them. | @inproceedings{inproceedings,
author = {Chen, Yanqing and Skiena, Steven},
year = {2014},
month = {06},
pages = {383-389},
title = {Building Sentiment Lexicons for All Major Languages},
volume = {2},
journal = {52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference},
doi = {10.3115/v1/P14-2063}
} | null | 5 | 2,089 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- af
- an
- ar
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fy
- ga
- gd
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- ia
- id
- io
- is
- it
- ja
- ka
- km
- kn
- ko
- ku
- ky
- la
- lb
- lt
- lv
- mk
- mr
- ms
- mt
- nl
- nn
- 'no'
- pl
- pt
- rm
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- tk
- tl
- tr
- uk
- ur
- uz
- vi
- vo
- wa
- yi
- zh
- zhw
license:
- gpl-3.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: SentiWS
dataset_info:
- config_name: af
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 45954
num_examples: 2299
download_size: 0
dataset_size: 45954
- config_name: an
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 1832
num_examples: 97
download_size: 0
dataset_size: 1832
- config_name: ar
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 58707
num_examples: 2794
download_size: 0
dataset_size: 58707
- config_name: az
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 40044
num_examples: 1979
download_size: 0
dataset_size: 40044
- config_name: be
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 41915
num_examples: 1526
download_size: 0
dataset_size: 41915
- config_name: bg
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 78779
num_examples: 2847
download_size: 0
dataset_size: 78779
- config_name: bn
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 70928
num_examples: 2393
download_size: 0
dataset_size: 70928
- config_name: br
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 3234
num_examples: 184
download_size: 0
dataset_size: 3234
- config_name: bs
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 39890
num_examples: 2020
download_size: 0
dataset_size: 39890
- config_name: ca
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 64512
num_examples: 3204
download_size: 0
dataset_size: 64512
- config_name: cs
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 53194
num_examples: 2599
download_size: 0
dataset_size: 53194
- config_name: cy
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 31546
num_examples: 1647
download_size: 0
dataset_size: 31546
- config_name: da
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 66756
num_examples: 3340
download_size: 0
dataset_size: 66756
- config_name: de
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 82223
num_examples: 3974
download_size: 0
dataset_size: 82223
- config_name: el
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 76281
num_examples: 2703
download_size: 0
dataset_size: 76281
- config_name: eo
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 50271
num_examples: 2604
download_size: 0
dataset_size: 50271
- config_name: es
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 87157
num_examples: 4275
download_size: 0
dataset_size: 87157
- config_name: et
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 41964
num_examples: 2105
download_size: 0
dataset_size: 41964
- config_name: eu
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 39641
num_examples: 1979
download_size: 0
dataset_size: 39641
- config_name: fa
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 53399
num_examples: 2477
download_size: 0
dataset_size: 53399
- config_name: fi
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 68294
num_examples: 3295
download_size: 0
dataset_size: 68294
- config_name: fo
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2213
num_examples: 123
download_size: 0
dataset_size: 2213
- config_name: fr
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 94832
num_examples: 4653
download_size: 0
dataset_size: 94832
- config_name: fy
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 3916
num_examples: 224
download_size: 0
dataset_size: 3916
- config_name: ga
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 21209
num_examples: 1073
download_size: 0
dataset_size: 21209
- config_name: gd
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 6441
num_examples: 345
download_size: 0
dataset_size: 6441
- config_name: gl
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 55279
num_examples: 2714
download_size: 0
dataset_size: 55279
- config_name: gu
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 60025
num_examples: 2145
download_size: 0
dataset_size: 60025
- config_name: he
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 54706
num_examples: 2533
download_size: 0
dataset_size: 54706
- config_name: hi
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 103800
num_examples: 3640
download_size: 0
dataset_size: 103800
- config_name: hr
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 43775
num_examples: 2208
download_size: 0
dataset_size: 43775
- config_name: ht
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 8261
num_examples: 472
download_size: 0
dataset_size: 8261
- config_name: hu
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 74203
num_examples: 3522
download_size: 0
dataset_size: 74203
- config_name: hy
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 44593
num_examples: 1657
download_size: 0
dataset_size: 44593
- config_name: ia
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 6401
num_examples: 326
download_size: 0
dataset_size: 6401
- config_name: id
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 56879
num_examples: 2900
download_size: 0
dataset_size: 56879
- config_name: io
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 3348
num_examples: 183
download_size: 0
dataset_size: 3348
- config_name: is
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 34565
num_examples: 1770
download_size: 0
dataset_size: 34565
- config_name: it
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 92165
num_examples: 4491
download_size: 0
dataset_size: 92165
- config_name: ja
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 21770
num_examples: 1017
download_size: 0
dataset_size: 21770
- config_name: ka
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 81286
num_examples: 2202
download_size: 0
dataset_size: 81286
- config_name: km
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 23133
num_examples: 956
download_size: 0
dataset_size: 23133
- config_name: kn
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 70449
num_examples: 2173
download_size: 0
dataset_size: 70449
- config_name: ko
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 41716
num_examples: 2118
download_size: 0
dataset_size: 41716
- config_name: ku
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2510
num_examples: 145
download_size: 0
dataset_size: 2510
- config_name: ky
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 5746
num_examples: 246
download_size: 0
dataset_size: 5746
- config_name: la
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 39092
num_examples: 2033
download_size: 0
dataset_size: 39092
- config_name: lb
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 4150
num_examples: 224
download_size: 0
dataset_size: 4150
- config_name: lt
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 45274
num_examples: 2190
download_size: 0
dataset_size: 45274
- config_name: lv
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 39879
num_examples: 1938
download_size: 0
dataset_size: 39879
- config_name: mk
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 81619
num_examples: 2965
download_size: 0
dataset_size: 81619
- config_name: mr
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 48601
num_examples: 1825
download_size: 0
dataset_size: 48601
- config_name: ms
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 57265
num_examples: 2934
download_size: 0
dataset_size: 57265
- config_name: mt
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 16913
num_examples: 863
download_size: 0
dataset_size: 16913
- config_name: nl
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 80335
num_examples: 3976
download_size: 0
dataset_size: 80335
- config_name: nn
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 35835
num_examples: 1894
download_size: 0
dataset_size: 35835
- config_name: 'no'
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 61160
num_examples: 3089
download_size: 0
dataset_size: 61160
- config_name: pl
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 73213
num_examples: 3533
download_size: 0
dataset_size: 73213
- config_name: pt
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 80618
num_examples: 3953
download_size: 0
dataset_size: 80618
- config_name: rm
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2060
num_examples: 116
download_size: 0
dataset_size: 2060
- config_name: ro
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 66071
num_examples: 3329
download_size: 0
dataset_size: 66071
- config_name: ru
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 82966
num_examples: 2914
download_size: 0
dataset_size: 82966
- config_name: sk
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 49751
num_examples: 2428
download_size: 0
dataset_size: 49751
- config_name: sl
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 44430
num_examples: 2244
download_size: 0
dataset_size: 44430
- config_name: sq
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 40484
num_examples: 2076
download_size: 0
dataset_size: 40484
- config_name: sr
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 53257
num_examples: 2034
download_size: 0
dataset_size: 53257
- config_name: sv
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 73939
num_examples: 3722
download_size: 0
dataset_size: 73939
- config_name: sw
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 24962
num_examples: 1314
download_size: 0
dataset_size: 24962
- config_name: ta
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 71071
num_examples: 2057
download_size: 0
dataset_size: 71071
- config_name: te
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 77306
num_examples: 2523
download_size: 0
dataset_size: 77306
- config_name: th
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 34209
num_examples: 1279
download_size: 0
dataset_size: 34209
- config_name: tk
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 1425
num_examples: 78
download_size: 0
dataset_size: 1425
- config_name: tl
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 36190
num_examples: 1858
download_size: 0
dataset_size: 36190
- config_name: tr
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 49295
num_examples: 2500
download_size: 0
dataset_size: 49295
- config_name: uk
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 80226
num_examples: 2827
download_size: 0
dataset_size: 80226
- config_name: ur
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 28469
num_examples: 1347
download_size: 0
dataset_size: 28469
- config_name: uz
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 1944
num_examples: 111
download_size: 0
dataset_size: 1944
- config_name: vi
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 18100
num_examples: 1016
download_size: 0
dataset_size: 18100
- config_name: vo
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 775
num_examples: 43
download_size: 0
dataset_size: 775
- config_name: wa
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 3450
num_examples: 193
download_size: 0
dataset_size: 3450
- config_name: yi
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 9001
num_examples: 395
download_size: 0
dataset_size: 9001
- config_name: zh
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 33025
num_examples: 1879
download_size: 0
dataset_size: 33025
- config_name: zhw
features:
- name: word
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 67675
num_examples: 3828
download_size: 0
dataset_size: 67675
config_names:
- 'no'
- af
- an
- ar
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fy
- ga
- gd
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- ia
- id
- io
- is
- it
- ja
- ka
- km
- kn
- ko
- ku
- ky
- la
- lb
- lt
- lv
- mk
- mr
- ms
- mt
- nl
- nn
- pl
- pt
- rm
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- tk
- tl
- tr
- uk
- ur
- uz
- vi
- vo
- wa
- yi
- zh
- zhw
---
# Dataset Card for SentiWS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/site/datascienceslab/projects/multilingualsentiment
- **Repository:** https://www.kaggle.com/rtatman/sentiment-lexicons-for-81-languages
- **Paper:** https://aclanthology.org/P14-2063/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset provides sentiment lexicons for 81 languages, generated via graph propagation based on a knowledge graph, i.e. a graphical representation of real-world entities and the links between them.
### Supported Tasks and Leaderboards
Sentiment-Classification
### Languages
Afrikaans
Aragonese
Arabic
Azerbaijani
Belarusian
Bulgarian
Bengali
Breton
Bosnian
Catalan; Valencian
Czech
Welsh
Danish
German
Greek, Modern
Esperanto
Spanish; Castilian
Estonian
Basque
Persian
Finnish
Faroese
French
Western Frisian
Irish
Scottish Gaelic; Gaelic
Galician
Gujarati
Hebrew (modern)
Hindi
Croatian
Haitian; Haitian Creole
Hungarian
Armenian
Interlingua
Indonesian
Ido
Icelandic
Italian
Japanese
Georgian
Khmer
Kannada
Korean
Kurdish
Kirghiz, Kyrgyz
Latin
Luxembourgish, Letzeburgesch
Lithuanian
Latvian
Macedonian
Marathi (Marāṭhī)
Malay
Maltese
Dutch
Norwegian Nynorsk
Norwegian
Polish
Portuguese
Romansh
Romanian, Moldavian, Moldovan
Russian
Slovak
Slovene
Albanian
Serbian
Swedish
Swahili
Tamil
Telugu
Thai
Turkmen
Tagalog
Turkish
Ukrainian
Urdu
Uzbek
Vietnamese
Volapük
Walloon
Yiddish
Chinese
Zhoa
## Dataset Structure
### Data Instances
```
{
  "word": "die",
  "sentiment": 0  # "negative"
}
```
### Data Fields
- word: a single word, as a string
- sentiment: the sentiment class label of the word, either negative (0) or positive (1)
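As a minimal sketch of how these `(word, sentiment)` pairs can be consumed once loaded (illustrative rows shaped like the instance above, not read from the actual data files):

```python
# Map the integer class labels to their names, as declared in the schema.
LABEL_NAMES = {0: "negative", 1: "positive"}

# Illustrative rows only; real rows come from the per-language configs.
rows = [
    {"word": "die", "sentiment": 0},
    {"word": "gut", "sentiment": 1},
    {"word": "schlecht", "sentiment": 0},
]

# Bucket the words by polarity to get a usable lexicon.
lexicon = {"negative": set(), "positive": set()}
for row in rows:
    lexicon[LABEL_NAMES[row["sentiment"]]].add(row["word"])

print(sorted(lexicon["negative"]))  # ['die', 'schlecht']
print(sorted(lexicon["positive"]))  # ['gut']
```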
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
GNU General Public License v3.
It is distributed here under the [GNU General Public License](http://www.gnu.org/licenses/gpl-3.0.html).
Note that this is the full GPL, which allows many free uses, but does not allow its incorporation into any type of distributed proprietary software, even in part or in translation.
For commercial applications please contact the dataset creators (see "Citation Information").
### Citation Information
This dataset was collected by Yanqing Chen and Steven Skiena. If you use it in your work, please cite the following paper:
```bibtex
@inproceedings{chen-skiena-2014-building,
title = "Building Sentiment Lexicons for All Major Languages",
author = "Chen, Yanqing and
Skiena, Steven",
booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jun,
year = "2014",
address = "Baltimore, Maryland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P14-2063",
doi = "10.3115/v1/P14-2063",
pages = "383--389",
}
```
### Contributions
Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset. |
inria-soda/tabular-benchmark | 2023-09-04T16:37:39.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"region:us"
] | inria-soda | null | null | null | 13 | 2,089 |
---
annotations_creators: []
license: []
pretty_name: tabular_benchmark
tags: []
task_categories:
- tabular-classification
- tabular-regression
configs:
- config_name: clf_cat_albert
data_files: clf_cat/albert.csv
- config_name: clf_cat_compas-two-years
data_files: clf_cat/compas-two-years.csv
- config_name: clf_cat_covertype
data_files: clf_cat/covertype.csv
- config_name: clf_cat_default-of-credit-card-clients
data_files: clf_cat/default-of-credit-card-clients.csv
- config_name: clf_cat_electricity
data_files: clf_cat/electricity.csv
- config_name: clf_cat_eye_movements
data_files: clf_cat/eye_movements.csv
- config_name: clf_cat_road-safety
data_files: clf_cat/road-safety.csv
- config_name: clf_num_Bioresponse
data_files: clf_num/Bioresponse.csv
- config_name: clf_num_Diabetes130US
data_files: clf_num/Diabetes130US.csv
- config_name: clf_num_Higgs
data_files: clf_num/Higgs.csv
- config_name: clf_num_MagicTelescope
data_files: clf_num/MagicTelescope.csv
- config_name: clf_num_MiniBooNE
data_files: clf_num/MiniBooNE.csv
- config_name: clf_num_bank-marketing
data_files: clf_num/bank-marketing.csv
- config_name: clf_num_california
data_files: clf_num/california.csv
- config_name: clf_num_covertype
data_files: clf_num/covertype.csv
- config_name: clf_num_credit
data_files: clf_num/credit.csv
- config_name: clf_num_default-of-credit-card-clients
data_files: clf_num/default-of-credit-card-clients.csv
- config_name: clf_num_electricity
data_files: clf_num/electricity.csv
- config_name: clf_num_eye_movements
data_files: clf_num/eye_movements.csv
- config_name: clf_num_heloc
data_files: clf_num/heloc.csv
- config_name: clf_num_house_16H
data_files: clf_num/house_16H.csv
- config_name: clf_num_jannis
data_files: clf_num/jannis.csv
- config_name: clf_num_pol
data_files: clf_num/pol.csv
- config_name: reg_cat_Airlines_DepDelay_1M
data_files: reg_cat/Airlines_DepDelay_1M.csv
- config_name: reg_cat_Allstate_Claims_Severity
data_files: reg_cat/Allstate_Claims_Severity.csv
- config_name: reg_cat_Bike_Sharing_Demand
data_files: reg_cat/Bike_Sharing_Demand.csv
- config_name: reg_cat_Brazilian_houses
data_files: reg_cat/Brazilian_houses.csv
- config_name: reg_cat_Mercedes_Benz_Greener_Manufacturing
data_files: reg_cat/Mercedes_Benz_Greener_Manufacturing.csv
- config_name: reg_cat_SGEMM_GPU_kernel_performance
data_files: reg_cat/SGEMM_GPU_kernel_performance.csv
- config_name: reg_cat_abalone
data_files: reg_cat/abalone.csv
- config_name: reg_cat_analcatdata_supreme
data_files: reg_cat/analcatdata_supreme.csv
- config_name: reg_cat_delays_zurich_transport
data_files: reg_cat/delays_zurich_transport.csv
- config_name: reg_cat_diamonds
data_files: reg_cat/diamonds.csv
- config_name: reg_cat_house_sales
data_files: reg_cat/house_sales.csv
- config_name: reg_cat_medical_charges
data_files: reg_cat/medical_charges.csv
- config_name: reg_cat_nyc-taxi-green-dec-2016
data_files: reg_cat/nyc-taxi-green-dec-2016.csv
- config_name: reg_cat_particulate-matter-ukair-2017
data_files: reg_cat/particulate-matter-ukair-2017.csv
- config_name: reg_cat_seattlecrime6
data_files: reg_cat/seattlecrime6.csv
- config_name: reg_cat_topo_2_1
data_files: reg_cat/topo_2_1.csv
- config_name: reg_cat_visualizing_soil
data_files: reg_cat/visualizing_soil.csv
- config_name: reg_num_Ailerons
data_files: reg_num/Ailerons.csv
- config_name: reg_num_Bike_Sharing_Demand
data_files: reg_num/Bike_Sharing_Demand.csv
- config_name: reg_num_Brazilian_houses
data_files: reg_num/Brazilian_houses.csv
- config_name: reg_num_MiamiHousing2016
data_files: reg_num/MiamiHousing2016.csv
- config_name: reg_num_abalone
data_files: reg_num/abalone.csv
- config_name: reg_num_cpu_act
data_files: reg_num/cpu_act.csv
- config_name: reg_num_delays_zurich_transport
data_files: reg_num/delays_zurich_transport.csv
- config_name: reg_num_diamonds
data_files: reg_num/diamonds.csv
- config_name: reg_num_elevators
data_files: reg_num/elevators.csv
- config_name: reg_num_house_16H
data_files: reg_num/house_16H.csv
- config_name: reg_num_house_sales
data_files: reg_num/house_sales.csv
- config_name: reg_num_houses
data_files: reg_num/houses.csv
- config_name: reg_num_medical_charges
data_files: reg_num/medical_charges.csv
- config_name: reg_num_nyc-taxi-green-dec-2016
data_files: reg_num/nyc-taxi-green-dec-2016.csv
- config_name: reg_num_pol
data_files: reg_num/pol.csv
- config_name: reg_num_sulfur
data_files: reg_num/sulfur.csv
- config_name: reg_num_superconduct
data_files: reg_num/superconduct.csv
- config_name: reg_num_wine_quality
data_files: reg_num/wine_quality.csv
- config_name: reg_num_yprop_4_1
data_files: reg_num/yprop_4_1.csv
---
# Tabular Benchmark
## Dataset Description
This dataset is a curation of various datasets from [OpenML](https://www.openml.org/), assembled to benchmark the performance of various machine learning algorithms.
- **Repository:** https://github.com/LeoGrin/tabular-benchmark/community
- **Paper:** https://hal.archives-ouvertes.fr/hal-03723551v2/document
### Dataset Summary
A benchmark built from a curated set of tabular data learning tasks, including:
- Regression from Numerical and Categorical Features
- Regression from Numerical Features
- Classification from Numerical and Categorical Features
- Classification from Numerical Features
### Supported Tasks and Leaderboards
- `tabular-regression`
- `tabular-classification`
## Dataset Structure
### Data Splits
This dataset consists of four splits (folders), one per task, each containing the datasets for that task:
- reg_num: regression on numerical features.
- reg_cat: regression on numerical and categorical features.
- clf_num: classification on numerical features.
- clf_cat: classification on numerical and categorical features.
Depending on the dataset you want to load, you can load it by passing `task_name/dataset_name.csv` to the `data_files` argument of `load_dataset`, as below:
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
```
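Since every file follows the `task_name/dataset_name.csv` layout, the `data_files` value can be built programmatically; a small sketch (`data_files_for` is a hypothetical helper, not part of the dataset tooling):

```python
VALID_TASKS = ("clf_cat", "clf_num", "reg_cat", "reg_num")

def data_files_for(task: str, dataset: str) -> str:
    """Build the data_files path for load_dataset, e.g. 'reg_cat/house_sales.csv'."""
    if task not in VALID_TASKS:
        raise ValueError(f"unknown task {task!r}; expected one of {VALID_TASKS}")
    return f"{task}/{dataset}.csv"

# The same file is also exposed under the config name "reg_cat_house_sales"
# (see the config list in the metadata header).
print(data_files_for("reg_cat", "house_sales"))  # reg_cat/house_sales.csv
```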
## Dataset Creation
### Curation Rationale
This dataset is curated to benchmark the performance of tree-based models against neural networks. The criteria used to pick datasets for curation, as described in the paper, are:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio below 1/10.
- **Undocumented datasets.** We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets is of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)
reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)
is below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],
but this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regression [Fernández-Delgado et al., 2014],
a close score for the simple and powerful models suggests that we are already close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately.
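The "not too easy" rule above can be sketched as follows (`too_easy` is a hypothetical helper; the 5% relative-difference threshold is the one stated in the text):

```python
def too_easy(simple_score: float, resnet_score: float, hgb_score: float,
             tol: float = 0.05) -> bool:
    """A dataset is dropped when the best simple model's score is within
    5% (relative difference) of both a default ResNet and a default
    HistGradientBoosting model."""
    def rel_diff(strong_score: float) -> float:
        return abs(strong_score - simple_score) / abs(strong_score)
    return rel_diff(resnet_score) < tol and rel_diff(hgb_score) < tol

print(too_easy(0.96, 0.97, 0.97))  # True: the simple model is within 5% of both
print(too_easy(0.80, 0.97, 0.97))  # False: the powerful models are clearly ahead
```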
### Source Data
**Numerical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|7.0|https://www.openml.org/d/151|https://www.openml.org/d/44120|
|covertype|566602.0|10.0|https://www.openml.org/d/293|https://www.openml.org/d/44121|
|pol|10082.0|26.0|https://www.openml.org/d/722|https://www.openml.org/d/44122|
|house_16H|13488.0|16.0|https://www.openml.org/d/821|https://www.openml.org/d/44123|
|MagicTelescope|13376.0|10.0|https://www.openml.org/d/1120|https://www.openml.org/d/44125|
|bank-marketing|10578.0|7.0|https://www.openml.org/d/1461|https://www.openml.org/d/44126|
|Bioresponse|3434.0|419.0|https://www.openml.org/d/4134|https://www.openml.org/d/45019|
|MiniBooNE|72998.0|50.0|https://www.openml.org/d/41150|https://www.openml.org/d/44128|
|default-of-credit-card-clients|13272.0|20.0|https://www.openml.org/d/42477|https://www.openml.org/d/45020|
|Higgs|940160.0|24.0|https://www.openml.org/d/42769|https://www.openml.org/d/44129|
|eye_movements|7608.0|20.0|https://www.openml.org/d/1044|https://www.openml.org/d/44130|
|Diabetes130US|71090.0|7.0|https://www.openml.org/d/4541|https://www.openml.org/d/45022|
|jannis|57580.0|54.0|https://www.openml.org/d/41168|https://www.openml.org/d/45021|
|heloc|10000.0|22.0|"https://www.kaggle.com/datasets/averkiyoliabev/home-equity-line-of-creditheloc?select=heloc_dataset_v1+%281%29.csv"|https://www.openml.org/d/45026|
|credit|16714.0|10.0|"https://www.kaggle.com/c/GiveMeSomeCredit/data?select=cs-training.csv"|https://www.openml.org/d/44089|
|california|20634.0|8.0|"https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html"|https://www.openml.org/d/45028|
**Categorical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|8.0|https://www.openml.org/d/151|https://www.openml.org/d/44156|
|eye_movements|7608.0|23.0|https://www.openml.org/d/1044|https://www.openml.org/d/44157|
|covertype|423680.0|54.0|https://www.openml.org/d/1596|https://www.openml.org/d/44159|
|albert|58252.0|31.0|https://www.openml.org/d/41147|https://www.openml.org/d/45035|
|compas-two-years|4966.0|11.0|https://www.openml.org/d/42192|https://www.openml.org/d/45039|
|default-of-credit-card-clients|13272.0|21.0|https://www.openml.org/d/42477|https://www.openml.org/d/45036|
|road-safety|111762.0|32.0|https://www.openml.org/d/42803|https://www.openml.org/d/45038|
**Numerical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|cpu_act|8192.0|21.0|https://www.openml.org/d/197|https://www.openml.org/d/44132|
|pol|15000.0|26.0|https://www.openml.org/d/201|https://www.openml.org/d/44133|
|elevators|16599.0|16.0|https://www.openml.org/d/216|https://www.openml.org/d/44134|
|wine_quality|6497.0|11.0|https://www.openml.org/d/287|https://www.openml.org/d/44136|
|Ailerons|13750.0|33.0|https://www.openml.org/d/296|https://www.openml.org/d/44137|
|yprop_4_1|8885.0|42.0|https://www.openml.org/d/416|https://www.openml.org/d/45032|
|houses|20640.0|8.0|https://www.openml.org/d/537|https://www.openml.org/d/44138|
|house_16H|22784.0|16.0|https://www.openml.org/d/574|https://www.openml.org/d/44139|
|delays_zurich_transport|5465575.0|9.0|https://www.openml.org/d/40753|https://www.openml.org/d/45034|
|diamonds|53940.0|6.0|https://www.openml.org/d/42225|https://www.openml.org/d/44140|
|Brazilian_houses|10692.0|8.0|https://www.openml.org/d/42688|https://www.openml.org/d/44141|
|Bike_Sharing_Demand|17379.0|6.0|https://www.openml.org/d/42712|https://www.openml.org/d/44142|
|nyc-taxi-green-dec-2016|581835.0|9.0|https://www.openml.org/d/42729|https://www.openml.org/d/44143|
|house_sales|21613.0|15.0|https://www.openml.org/d/42731|https://www.openml.org/d/44144|
|sulfur|10081.0|6.0|https://www.openml.org/d/23515|https://www.openml.org/d/44145|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/44146|
|MiamiHousing2016|13932.0|14.0|https://www.openml.org/d/43093|https://www.openml.org/d/44147|
|superconduct|21263.0|79.0|https://www.openml.org/d/43174|https://www.openml.org/d/44148|
**Categorical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|topo_2_1|8885.0|255.0|https://www.openml.org/d/422|https://www.openml.org/d/45041|
|analcatdata_supreme|4052.0|7.0|https://www.openml.org/d/504|https://www.openml.org/d/44055|
|visualizing_soil|8641.0|4.0|https://www.openml.org/d/688|https://www.openml.org/d/44056|
|delays_zurich_transport|5465575.0|12.0|https://www.openml.org/d/40753|https://www.openml.org/d/45045|
|diamonds|53940.0|9.0|https://www.openml.org/d/42225|https://www.openml.org/d/44059|
|Allstate_Claims_Severity|188318.0|124.0|https://www.openml.org/d/42571|https://www.openml.org/d/45046|
|Mercedes_Benz_Greener_Manufacturing|4209.0|359.0|https://www.openml.org/d/42570|https://www.openml.org/d/44061|
|Brazilian_houses|10692.0|11.0|https://www.openml.org/d/42688|https://www.openml.org/d/44062|
|Bike_Sharing_Demand|17379.0|11.0|https://www.openml.org/d/42712|https://www.openml.org/d/44063|
|Airlines_DepDelay_1M|1000000.0|5.0|https://www.openml.org/d/42721|https://www.openml.org/d/45047|
|nyc-taxi-green-dec-2016|581835.0|16.0|https://www.openml.org/d/42729|https://www.openml.org/d/44065|
|abalone|4177.0|8.0|https://www.openml.org/d/42726|https://www.openml.org/d/45042|
|house_sales|21613.0|17.0|https://www.openml.org/d/42731|https://www.openml.org/d/44066|
|seattlecrime6|52031.0|4.0|https://www.openml.org/d/42496|https://www.openml.org/d/45043|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/45048|
|particulate-matter-ukair-2017|394299.0|6.0|https://www.openml.org/d/42207|https://www.openml.org/d/44068|
|SGEMM_GPU_kernel_performance|241600.0|9.0|https://www.openml.org/d/43144|https://www.openml.org/d/44069|
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
[More Information Needed]
### Citation Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. hal-03723551v2
|
mteb/sts13-sts | 2022-09-27T19:12:02.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 1 | 2,085 | ---
language:
- en
--- |
bot-yaya/undl_text | 2023-10-07T00:31:07.000Z | [
"region:us"
] | bot-yaya | null | null | null | 0 | 2,079 | ---
dataset_info:
features:
- name: ar
dtype: string
- name: zh
dtype: string
- name: en
dtype: string
- name: fr
dtype: string
- name: ru
dtype: string
- name: es
dtype: string
- name: de
dtype: string
- name: record
dtype: string
splits:
- name: train
num_bytes: 48667711040
num_examples: 165840
download_size: 18648916788
dataset_size: 48667711040
---
# Dataset Card for "undl_text"
Source text produced by running pandoc over the docx files, using the command: `pandoc -i {filepath} -t plain -o {outpath} --strip-comments`
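As a minimal sketch of the kind of post-processing such plain-text pandoc output typically needs before alignment (`denoise` is a hypothetical helper, not part of this dataset's pipeline), one can drop separator lines made entirely of dashes or underscores and collapse the blank runs they leave behind:

```python
import re

def denoise(text: str) -> str:
    """Remove lines consisting only of dashes/underscores (pandoc's plain-text
    rendering of horizontal rules and table borders), then collapse runs of
    blank lines."""
    lines = [ln for ln in text.splitlines()
             if not re.fullmatch(r"[-_\s]{3,}", ln)]
    return re.sub(r"\n{3,}", "\n\n", "\n".join(lines)).strip()

sample = "Header\n----------\n\n\n\nBody text\n______\nEnd"
print(denoise(sample))  # prints Header, a blank line, then Body text and End
```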
These texts may still need some denoising steps, such as removing separator lines made entirely of dashes and stripping table elements, before they can be used in the downstream translation and alignment steps. |
mkshing/xlsum_ja | 2023-06-20T23:28:48.000Z | [
"task_categories:summarization",
"task_categories:text-classification",
"language:ja",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.10403",
"region:us"
] | mkshing | null | null | null | 2 | 2,074 | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
- text-classification
language:
- ja
---
This is the Japanese subset of [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum), filtered following the decontamination procedure used in [PaLM 2](https://arxiv.org/abs/2305.10403).
**filters**
- 15-gram overlap (code: https://gist.github.com/mkshing/d6371cbfdd50d4f352cee247fd4dd86a)
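A minimal reimplementation sketch of such a 15-gram overlap check (the actual filter is the gist linked above; this version is only illustrative):

```python
def ngrams(text: str, n: int = 15) -> set:
    """All whitespace-tokenized n-grams of a text."""
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlaps(example: str, reference: str, n: int = 15) -> bool:
    """True if the two texts share at least one n-gram -- the contamination
    signal used to drop training examples."""
    return bool(ngrams(example, n) & ngrams(reference, n))

a = " ".join(str(i) for i in range(30))       # tokens 0..29
b = " ".join(str(i) for i in range(10, 40))   # tokens 10..39
print(overlaps(a, b))  # True: both contain the 15-gram starting at token 10
print(overlaps(a, " ".join(str(i) for i in range(100, 130))))  # False
```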
**number of examples**
- train: 4215 (before: 7113)
- validation: 758 (before: 889)
- test: 766 (before: 889)
|
mteb/amazon_massive_intent | 2022-09-27T19:10:30.000Z | [
"language:af",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:lv",
"language:ml",
"language:mn",
"language:ms",
"language:my",
"language:nb",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:sq",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"region:us"
] | mteb | MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations
for the Natural Language Understanding tasks of intent prediction and slot annotation.
Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing
the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions. | null | null | 6 | 2,062 | ---
language:
- af
- am
- ar
- az
- bn
- cy
- da
- de
- el
- en
- es
- fa
- fr
- he
- hi
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- km
- kn
- ko
- lv
- ml
- mn
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sl
- sq
- sv
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- zh
--- |
lansinuote/ChnSentiCorp | 2023-02-28T05:31:30.000Z | [
"region:us"
] | lansinuote | null | null | null | 8 | 2,046 | Entry not found |
LDJnr/Puffin | 2023-08-10T22:28:55.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"Roleplay",
"region:us"
] | LDJnr | null | null | null | 59 | 2,038 | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
pretty_name: Puffin
size_categories:
- 1K<n<10K
---
## This is the Official Puffin dataset. Exactly 3,000 examples with each response created using GPT-4.
- Composed of over 2,000 multi-turn conversations between GPT-4 and real humans.
- Average context length per conversation is over 1,000 tokens. (will measure this more accurately soon)
- Average turns per conversation is more than 10. (will measure this more accurately soon)
- The other portion of Puffin is made of manually curated subsets of the following (All responses synthesized using GPT-4):
CamelAI/Physics
CamelAI/Math
CamelAI/Biology
CamelAI/Chemistry
A majority of the real multi-turn conversations are made up of a curated subset of the original ShareGPT dataset.
- Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021"
- Most importantly, we narrowed down the ShareGPT dataset to strictly only GPT-4 examples. Knowing which ShareGPT examples were GPT-4 vs GPT-3.5 would have been a much more arduous task if it wasn't for the help of the folks over at OpenChat, who annotated the necessary examples.
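The cleaning pass described above can be sketched as a simple phrase-based filter. This is illustrative only; the actual cleaning pipeline is not published, and the phrase list here is just the two examples mentioned in the card.

```python
# Hypothetical filter: drop conversations containing known AI-moralizing phrases.
BANNED_PHRASES = ("As an AI language model", "September 2021")

def is_clean(conversation):
    """Keep a conversation only if no turn contains a banned phrase."""
    return not any(p in turn for turn in conversation for p in BANNED_PHRASES)

convo_ok = ["What is 2+2?", "4."]
convo_bad = ["Who are you?", "As an AI language model, I cannot say."]
assert is_clean(convo_ok)
assert not is_clean(convo_bad)
```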
The curation process involves some relatively arduous steps when it comes to actually executing on the best experiments or concepts for how to filter examples out. Luckily, folks over at NousResearch helped expedite this process with little to no sacrifice in quality; a big thank you to J-Supha specifically for making these significant contributions.
Along with J-Supha, some other people are worth mentioning: the folks who joined long late-night calls to help debug and/or get Puffin training on Llama-2 ASAP, all within 12 hours of Llama-2 being announced.
- Emozilla, Teknium, Caseus. And of course thank you to RedmondAI for sponsoring the training compute!
## Future Plans & How you can help!
This is a relatively early build amongst the grand plans for the future of what I plan to work on!
In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
|
conceptofmind/FLAN_2022 | 2023-05-25T15:37:54.000Z | [
"region:us"
] | conceptofmind | null | null | null | 71 | 2,015 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 19462822989
num_examples: 11313842
download_size: 11605682767
dataset_size: 19462822989
---
# Dataset Card for "FLAN_2022"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
medical_questions_pairs | 2023-01-25T14:40:20.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2008.13546",
"region:us"
] | null | This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. | @misc{mccreery2020effective,
title={Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs},
author={Clara H. McCreery and Namit Katariya and Anitha Kannan and Manish Chablani and Xavier Amatriain},
year={2020},
eprint={2008.13546},
archivePrefix={arXiv},
primaryClass={cs.IR}
} | null | 27 | 2,002 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: MedicalQuestionsPairs
dataset_info:
features:
- name: dr_id
dtype: int32
- name: question_1
dtype: string
- name: question_2
dtype: string
- name: label
dtype:
class_label:
names:
'0': 0
'1': 1
splits:
- name: train
num_bytes: 701650
num_examples: 3048
download_size: 665688
dataset_size: 701650
---
# Dataset Card for medical_questions_pairs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Medical questions pairs repository](https://github.com/curai/medical-question-pair-dataset)
- **Paper:** [Effective Transfer Learning for Identifying Similar Questions:Matching User Questions to COVID-19 FAQs](https://arxiv.org/abs/2008.13546)
### Dataset Summary
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
### Supported Tasks and Leaderboards
- `text-classification` : The dataset can be used to train a model to identify similar and non similar medical question pairs.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
The dataset contains dr_id, question_1, question_2, label. 11 different doctors were used for this task so dr_id ranges from 1 to 11. The label is 1 if the question pair is similar and 0 otherwise.
### Data Fields
- `dr_id`: 11 different doctors were used for this task so dr_id ranges from 1 to 11
- `question_1`: Original Question
- `question_2`: Rewritten question maintaining the same intent as the original question
- `label`: The label is 1 if the question pair is similar and 0 otherwise.
### Data Splits
The dataset currently consists of a single split (train), but can be split further as required.
| | train |
|----------------------------|------:|
| Non similar Question Pairs | 1524 |
| Similar Question Pairs | 1524 |
## Dataset Creation
Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
### Curation Rationale
[More Information Needed]
### Source Data
1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
#### Who are the annotators?
**Curai's doctors**
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{mccreery2020effective,
title={Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs},
author={Clara H. McCreery and Namit Katariya and Anitha Kannan and Manish Chablani and Xavier Amatriain},
year={2020},
eprint={2008.13546},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
BeIR/arguana-qrels | 2022-10-23T06:06:46.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 1,997 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
BEIR supports nine heterogeneous retrieval tasks, with models typically evaluated using standard retrieval metrics such as nDCG@10.
Results for current models are collected in the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (JSON Lines) that contains a list of dictionaries, each with three fields: `_id` (a unique document identifier), `title` (the document title, optional) and `text` (a document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (JSON Lines) that contains a list of dictionaries, each with two fields: `_id` (a unique query identifier) and `text` (the query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
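As an illustration, all three file formats can be parsed with nothing but the standard library. The strings below are in-memory stand-ins for real corpus, queries and qrels files:

```python
import csv
import io
import json

# Stand-in file contents following the formats described above.
corpus_jsonl = '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}'
queries_jsonl = '{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}'
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1"

# corpus/queries: one JSON object per line, keyed by _id.
corpus = {d["_id"]: d for d in map(json.loads, corpus_jsonl.splitlines())}
queries = {q["_id"]: q["text"] for q in map(json.loads, queries_jsonl.splitlines())}

# qrels: header row, then tab-separated (query-id, corpus-id, score) triples.
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(qrels)  # {'q1': {'doc1': 1}}
```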
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
Helsinki-NLP/tatoeba_mt | 2022-10-21T15:50:25.000Z | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ch",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:ku",
"language:kw",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nb",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:pl",
"language:pt",
"language:qu",
"language:rn",
"language:ro",
"language:ru",
"language:sh",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vo",
"language:yi",
"language:zh",
"license:cc-by-2.0",
"region:us"
] | Helsinki-NLP | The Tatoeba Translation Challenge is a multilingual data set of
machine translation benchmarks derived from user-contributed
translations collected by [Tatoeba.org](https://tatoeba.org/) and
provided as parallel corpus from [OPUS](https://opus.nlpl.eu/). This
dataset includes test and development data sorted by language pair. It
includes test sets for hundreds of language pairs and is continuously
updated. Please check the version number tag to refer to the release
that you are using. | @inproceedings{tiedemann-2020-tatoeba,
title = "The {T}atoeba {T}ranslation {C}hallenge {--} {R}ealistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
} | null | 40 | 1,981 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- af
- ar
- az
- be
- bg
- bn
- br
- bs
- ca
- ch
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fy
- ga
- gd
- gl
- gn
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jv
- ka
- kk
- km
- ko
- ku
- kw
- la
- lb
- lt
- lv
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nb
- nl
- nn
- 'no'
- oc
- pl
- pt
- qu
- rn
- ro
- ru
- sh
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vo
- yi
- zh
license:
- cc-by-2.0
multilinguality:
- translation
pretty_name: The Tatoeba Translation Challenge
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for the Tatoeba Translation Challenge
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/Helsinki-NLP/Tatoeba-Challenge/
- **Repository:** https://github.com/Helsinki-NLP/Tatoeba-Challenge/
- **Paper:** [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/)
- **Leaderboard:**
- **Point of Contact:** [Jörg Tiedemann](mailto:jorg.tiedemann@helsinki.fi)
### Dataset Summary
The Tatoeba Translation Challenge is a multilingual data set of machine translation benchmarks derived from user-contributed translations collected by [Tatoeba.org](https://tatoeba.org/) and provided as parallel corpus from [OPUS](https://opus.nlpl.eu/). This dataset includes test and development data sorted by language pair. It includes test sets for hundreds of language pairs and is continuously updated. Please check the version number tag to refer to the release that you are using.
### Supported Tasks and Leaderboards
The translation task is described in detail in the [Tatoeba-Challenge repository](https://github.com/Helsinki-NLP/Tatoeba-Challenge) and covers various sub-tasks with different data coverage and resources. [Training data](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README.md) is also available from the same repository and [results](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/results/tatoeba-results-all.md) are published and collected as well. [Models](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/results/tatoeba-models-all.md) are also released for public use and are also partially available from the [huggingface model hub](https://huggingface.co/Helsinki-NLP).
### Languages
The data set covers hundreds of languages and language pairs and is organized by ISO-639-3 language codes. The current release covers the following languages: Afrikaans, Arabic, Azerbaijani, Belarusian, Bulgarian, Bengali, Breton, Bosnian, Catalan, Chamorro, Czech, Chuvash, Welsh, Danish, German, Modern Greek, English, Esperanto, Spanish, Estonian, Basque, Persian, Finnish, Faroese, French, Western Frisian, Irish, Scottish Gaelic, Galician, Guarani, Hebrew, Hindi, Croatian, Hungarian, Armenian, Interlingua, Indonesian, Interlingue, Ido, Icelandic, Italian, Japanese, Javanese, Georgian, Kazakh, Khmer, Korean, Kurdish, Cornish, Latin, Luxembourgish, Lithuanian, Latvian, Maori, Macedonian, Malayalam, Mongolian, Marathi, Malay, Maltese, Burmese, Norwegian Bokmål, Dutch, Norwegian Nynorsk, Norwegian, Occitan, Polish, Portuguese, Quechua, Rundi, Romanian, Russian, Serbo-Croatian, Slovenian, Albanian, Serbian, Swedish, Swahili, Tamil, Telugu, Thai, Turkmen, Tagalog, Turkish, Tatar, Uighur, Ukrainian, Urdu, Uzbek, Vietnamese, Volapük, Yiddish, Chinese
## Dataset Structure
### Data Instances
Data instances are given as translation units in TAB-separated files with four columns: source and target language ISO-639-3 codes, source language text and target language text. Note that we do not imply a translation direction and consider the data set to be symmetric and to be used as a test set in both directions. Language-pair-specific subsets are only provided under the label of one direction using sorted ISO-639-3 language IDs.
Some subsets contain several sub-languages or language variants. They may refer to macro-languages such as Serbo-Croatian, which is covered by the ISO code `hbs`. Language variants may also include different writing systems, in which case ISO 15924 script codes are attached to the language codes. Here are a few examples from the English to Serbo-Croatian test set, including examples for Bosnian, Croatian and Serbian in Cyrillic and in Latin scripts:
```
eng bos_Latn Children are the flowers of our lives. Djeca su cvijeće našeg života.
eng hrv A bird was flying high up in the sky. Ptica je visoko letjela nebom.
eng srp_Cyrl A bird in the hand is worth two in the bush. Боље врабац у руци, него голуб на грани.
eng srp_Latn Canada is the motherland of ice hockey. Kanada je zemlja-majka hokeja na ledu.
```
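The four-column format above is straightforward to process. The sketch below (an illustration with our own helper names, not part of the official Tatoeba tooling) parses such TAB-separated translation units and derives the sorted-ID label under which a language-pair subset is stored:

```python
def pair_label(src: str, tgt: str) -> str:
    """Return the language-pair subset label using sorted ISO-639-3 codes."""
    return "-".join(sorted([src, tgt]))

def parse_units(tsv_text: str):
    """Parse TAB-separated translation units with four columns:
    source language code, target language code, source text, target text."""
    units = []
    for line in tsv_text.strip().split("\n"):
        src_lang, tgt_lang, src_text, tgt_text = line.split("\t")
        units.append({
            "source_lang": src_lang,
            "target_lang": tgt_lang,
            "source_text": src_text,
            "target_text": tgt_text,
        })
    return units

sample = "eng\thrv\tA bird was flying high up in the sky.\tPtica je visoko letjela nebom."
units = parse_units(sample)
```

Since the data set is symmetric, the same label (e.g. `eng-hrv`) covers both translation directions.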
There are also data sets with sentence pairs in the same language. In most cases, those are variants with minor spelling differences but they also include rephrased sentences. Here are a few examples from the English test set:
```
eng eng All of us got into the car. We all got in the car.
eng eng All of us hope that doesn't happen. All of us hope that that doesn't happen.
eng eng All the seats are booked. The seats are all sold out.
```
### Data Splits
Test and development data sets are disjoint with respect to sentence pairs but may include overlaps in individual source or target language sentences. Development data should not be used in training directly. The goal of the data splits is to create test sets of reasonable size with a large language coverage. Test sets include at most 10,000 instances. Development data do not exist for all language pairs.
To be comparable with other results, models should use the training data distributed from the [Tatoeba MT Challenge Repository](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) including monolingual data sets also listed there.
## Dataset Creation
### Curation Rationale
The Tatoeba MT data set will be updated continuously and the data preparation procedures are also public and released on [github](https://github.com/Helsinki-NLP/Tatoeba-Challenge/). High language coverage is the main goal of the project and data sets are prepared to be consistent and systematic with standardized language labels and distribution formats.
### Source Data
#### Initial Data Collection and Normalization
The Tatoeba data sets are collected from user-contributed translations submitted to [Tatoeba.org](https://tatoeba.org/) and compiled into a multi-parallel corpus in [OPUS](https://opus.nlpl.eu/Tatoeba.php). The test and development sets are incrementally updated with new releases of the Tatoeba data collection at OPUS. New releases extend the existing data sets. Test sets should not overlap with any of the released development data sets.
#### Who are the source language producers?
The data sets come from [Tatoeba.org](https://tatoeba.org/), which provides a large database of sentences and their translations into a wide variety of languages. Its content is constantly growing as a result of voluntary contributions of thousands of users.
The original project was founded by Trang Ho in 2006, hosted on Sourceforge under the codename of multilangdict.
### Annotations
#### Annotation process
Sentences are translated by volunteers and the Tatoeba database also provides additional metadata about each record including user ratings etc. However, the metadata is currently not used in any way for the compilation of the MT benchmark. Language skills of contributors naturally vary quite a bit and not all translations are done by native speakers of the target language. More information about the contributions can be found at [Tatoeba.org](https://tatoeba.org/).
#### Who are the annotators?
### Personal and Sensitive Information
For information about handling personal and sensitive information, we refer to the [original provider](https://tatoeba.org/) of the data. This data set has not been processed in any way to detect or remove potentially sensitive or personal information.
## Considerations for Using the Data
### Social Impact of Dataset
The language coverage is high, which makes this a highly valuable resource for machine translation development, especially for lesser-resourced languages and language pairs. The constantly growing database also represents a dynamic resource, and its value will grow further.
### Discussion of Biases
The original source lives from its contributors, and their interests and backgrounds will lead to certain subjective and cultural biases. Language coverage and translation quality are also biased by the skills of the contributors.
### Other Known Limitations
The sentences are typically quite short and, therefore, rather easy to translate. For high-resource languages, this leads to results that will be less useful than more challenging benchmarks. For lesser-resourced language pairs, the limited complexity of the examples actually makes it possible to measure progress even in very challenging setups.
## Additional Information
### Dataset Curators
The data set is curated by the University of Helsinki and its [language technology research group](https://blogs.helsinki.fi/language-technology/). Data and tools used for creating and using the resource are [open source](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) and will be maintained as part of the [OPUS ecosystem](https://opus.nlpl.eu/) for parallel data and machine translation research.
### Licensing Information
The data sets are distributed under the same licence agreement as the original Tatoeba database using a
[CC-BY 2.0 license](https://creativecommons.org/licenses/by/2.0/fr/). More information about the terms of use of the original data sets is listed [here](https://tatoeba.org/eng/terms_of_use).
### Citation Information
If you use the data sets, please cite the following paper: [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/)
```
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
### Contributions
Thanks to [@jorgtied](https://github.com/jorgtied) and [@Helsinki-NLP](https://github.com/Helsinki-NLP) for adding this dataset.
Thanks also to [CSC Finland](https://www.csc.fi/en/solutions-for-research) for providing computational resources and storage space for the work on OPUS and other MT projects.
|
yahoo_answers_topics | 2023-01-25T15:03:25.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|other-yahoo-answers-corpus",
"language:en",
"license:unknown",
"region:us"
] | null | Yahoo! Answers Topic Classification is text classification dataset. The dataset is the Yahoo! Answers corpus as of 10/25/2007. The Yahoo! Answers topic classification dataset is constructed using 10 largest main categories. From all the answers and other meta-information, this dataset only used the best answer content and the main category information. | null | null | 26 | 1,975 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended|other-yahoo-answers-corpus
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: YahooAnswersTopics
dataset_info:
features:
- name: id
dtype: int32
- name: topic
dtype:
class_label:
names:
'0': Society & Culture
'1': Science & Mathematics
'2': Health
'3': Education & Reference
'4': Computers & Internet
'5': Sports
'6': Business & Finance
'7': Entertainment & Music
'8': Family & Relationships
'9': Politics & Government
- name: question_title
dtype: string
- name: question_content
dtype: string
- name: best_answer
dtype: string
config_name: yahoo_answers_topics
splits:
- name: train
num_bytes: 760460695
num_examples: 1400000
- name: test
num_bytes: 32661362
num_examples: 60000
download_size: 319476345
dataset_size: 793122057
train-eval-index:
- config: yahoo_answers_topics
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
question_content: text
topic: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "Yahoo Answers Topics"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
wiqa | 2023-04-05T13:43:43.000Z | [
"language:en",
"region:us"
] | null | The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph.
The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions. | @article{wiqa,
author = {Niket Tandon and Bhavana Dalvi Mishra and Keisuke Sakaguchi and Antoine Bosselut and Peter Clark},
title = {WIQA: A dataset for "What if..." reasoning over procedural text},
journal = {arXiv:1909.04739v1},
year = {2019},
} | null | 2 | 1,973 | ---
language:
- en
paperswithcode_id: wiqa
pretty_name: What-If Question Answering
dataset_info:
features:
- name: question_stem
dtype: string
- name: question_para_step
sequence: string
- name: answer_label
dtype: string
- name: answer_label_as_choice
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: metadata_question_id
dtype: string
- name: metadata_graph_id
dtype: string
- name: metadata_para_id
dtype: string
- name: metadata_question_type
dtype: string
- name: metadata_path_len
dtype: int32
splits:
- name: train
num_bytes: 17089298
num_examples: 29808
- name: test
num_bytes: 1532223
num_examples: 3003
- name: validation
num_bytes: 3779584
num_examples: 6894
download_size: 5247733
dataset_size: 22401105
---
# Dataset Card for "wiqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/wiqa](https://allenai.org/data/wiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.24 MB
- **Size of the generated dataset:** 22.40 MB
- **Total amount of disk used:** 27.65 MB
### Dataset Summary
The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph.
The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 5.24 MB
- **Size of the generated dataset:** 22.40 MB
- **Total amount of disk used:** 27.65 MB
An example of 'validation' looks as follows.
```
{
"answer_label": "more",
"answer_label_as_choice": "A",
"choices": {
"label": ["A", "B", "C"],
"text": ["more", "less", "no effect"]
},
"metadata_graph_id": "481",
"metadata_para_id": "528",
"metadata_path_len": 3,
"metadata_question_id": "influence_graph:528:481:77#0",
"metadata_question_type": "INPARA_EFFECT",
"question_para_step": ["A male and female rabbit mate", "The female rabbit becomes pregnant", "Baby rabbits form inside of the mother rabbit", "The female rabbit gives birth to a litter", "The newborn rabbits grow up to become adults", "The adult rabbits find mates."],
"question_stem": "suppose the female is sterile happens, how will it affect LESS rabbits."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question_stem`: a `string` feature.
- `question_para_step`: a `list` of `string` features.
- `answer_label`: a `string` feature.
- `answer_label_as_choice`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `metadata_question_id`: a `string` feature.
- `metadata_graph_id`: a `string` feature.
- `metadata_para_id`: a `string` feature.
- `metadata_question_type`: a `string` feature.
- `metadata_path_len`: a `int32` feature.
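As a quick illustration of how these fields fit together (our own sketch, not part of the dataset tooling), the choice letter stored in `answer_label_as_choice` can be resolved back to its answer text through the parallel lists in `choices`:

```python
# Example instance, abbreviated from the 'validation' sample shown above.
instance = {
    "answer_label": "more",
    "answer_label_as_choice": "A",
    "choices": {"label": ["A", "B", "C"], "text": ["more", "less", "no effect"]},
}

def resolve_answer(example: dict) -> str:
    """Map the stored choice letter to the corresponding answer text."""
    choices = example["choices"]
    idx = choices["label"].index(example["answer_label_as_choice"])
    return choices["text"][idx]

answer = resolve_answer(instance)  # "more", consistent with answer_label
```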
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|29808| 6894|3003|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{wiqa,
  author = {Niket Tandon and Bhavana Dalvi Mishra and Keisuke Sakaguchi and Antoine Bosselut and Peter Clark},
title = {WIQA: A dataset for "What if..." reasoning over procedural text},
journal = {arXiv:1909.04739v1},
year = {2019},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
Gustavosta/Stable-Diffusion-Prompts | 2022-09-18T22:38:59.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | Gustavosta | null | null | null | 330 | 1,972 | ---
license:
- unknown
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
source_datasets:
- original
---
# Stable Diffusion Dataset
This is a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". Extracting the data was a little difficult, since the search engine still doesn't offer a public API that isn't protected by Cloudflare.
If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)".
If you want to see the model, go to: "[Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion)". |
hf-internal-testing/example-documents | 2022-08-04T12:42:46.000Z | [
"region:us"
] | hf-internal-testing | null | null | null | 1 | 1,956 | Entry not found |
mteb/sts14-sts | 2022-09-27T19:11:37.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 1 | 1,949 | ---
language:
- en
--- |
frgfm/imagenette | 2022-12-11T22:26:06.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:apache-2.0",
"region:us"
] | frgfm | Imagenette is a subset of 10 easily classified classes from Imagenet
(tench, English springer, cassette player, chain saw, church, French
horn, garbage truck, gas pump, golf ball, parachute). | @software{Howard_Imagenette_2019,
title={Imagenette: A smaller subset of 10 easily classified classes from Imagenet},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette}
} | null | 7 | 1,923 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality: []
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- image-classification
task_ids: []
paperswithcode_id: imagenette
pretty_name: Imagenette
---
# Dataset Card for Imagenette
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/fastai/imagenette
- **Repository:** https://github.com/fastai/imagenette
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagenette
### Dataset Summary
A smaller subset of 10 easily classified classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary), and a little more French.
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind for the creation, curation or packaging of the dataset.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
A data point comprises an image URL and its classification label.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>,
'label': 'tench',
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: the expected class label of the image.
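The ten class labels are the ones listed in the dataset summary. As a small convenience sketch (the index order shown here is an assumption for illustration, not taken from the dataset metadata), a lookup table can map integer label indices to readable names:

```python
# The ten Imagenette classes from the dataset summary; the index order
# below is an assumption for illustration, not official metadata.
IMAGENETTE_CLASSES = [
    "tench", "English springer", "cassette player", "chain saw", "church",
    "French horn", "garbage truck", "gas pump", "golf ball", "parachute",
]

def label_name(index: int) -> str:
    """Return a human-readable class name for an integer label index."""
    return IMAGENETTE_CLASSES[index]
```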
### Data Splits
| |train|validation|
|----------|----:|---------:|
|imagenette| 9469| 3925|
## Dataset Creation
### Curation Rationale
cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale
### Source Data
#### Initial Data Collection and Normalization
Imagenette is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization).
### Annotations
#### Annotation process
cf. https://huggingface.co/datasets/imagenet-1k#annotation-process
#### Who are the annotators?
cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators
### Personal and Sensitive Information
cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information
## Considerations for Using the Data
### Social Impact of Dataset
cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset
### Discussion of Biases
cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases
### Other Known Limitations
cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations
## Additional Information
### Dataset Curators
cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators
and Jeremy Howard
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Howard_Imagenette_2019,
title={Imagenette: A smaller subset of 10 easily classified classes from Imagenet},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette}
}
```
### Contributions
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
|
cppe-5 | 2023-03-06T18:48:26.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"medical-personal-protective-equipment-detection",
"arxiv:2112.09569",
"region:us"
] | null | CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal
to allow the study of subordinate categorization of medical personal protective equipments,
which is not possible with other popular data sets that focus on broad level categories. | @misc{dagli2021cppe5,
title={CPPE-5: Medical Personal Protective Equipment Dataset},
author={Rishit Dagli and Ali Mustufa Shaikh},
year={2021},
eprint={2112.09569},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | null | 7 | 1,919 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
paperswithcode_id: cppe-5
pretty_name: CPPE - 5
tags:
- medical-personal-protective-equipment-detection
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': Coverall
'1': Face_Shield
'2': Gloves
'3': Goggles
'4': Mask
splits:
- name: train
num_bytes: 240481257
num_examples: 1000
- name: test
num_bytes: 4172715
num_examples: 29
download_size: 238482705
dataset_size: 244653972
---
# Dataset Card for CPPE - 5
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/Rishit-dagli/CPPE-Dataset
- **Paper:** [CPPE-5: Medical Personal Protective Equipment Dataset](https://arxiv.org/abs/2112.09569)
- **Leaderboard:** https://paperswithcode.com/sota/object-detection-on-cppe-5
- **Point of Contact:** rishit.dagli@gmail.com
### Dataset Summary
CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.
Some features of this dataset are:
* high quality images and annotations (~4.6 bounding boxes per image)
* real-life images unlike any current such dataset
* majority of non-iconic images (allowing easy deployment to real-world environments)
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at https://paperswithcode.com/sota/object-detection-on-cppe-5. The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.
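The IoU thresholds used by this metric come from plain box overlap. A minimal sketch of that computation (our own illustration, not the official COCO evaluation code), for boxes given as `[x1, y1, x2, y2]` corners:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in [x1, y1, x2, y2] corner format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction counts as a match at a given threshold (e.g. 0.50) when its IoU with a ground-truth box of the same category meets that threshold.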
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x2373B065C18>,
'width': 943,
'height': 663,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category, with possible values including `Coverall` (0),`Face_Shield` (1),`Gloves` (2),`Goggles` (3) and `Mask` (4)
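The `bbox` values use the COCO `[x_min, y_min, width, height]` convention. As a minimal sketch (the helper name is ours, not part of the dataset), here is a conversion to the corner format many detection pipelines expect, plus a check that the sample instance's `area` values equal width × height:

```python
def coco_to_corners(bbox):
    """Convert a COCO [x_min, y_min, width, height] box to
    [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Boxes and areas from the sample instance above: area == width * height.
bboxes = [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0]]
areas = [3796, 1596]
for box, area in zip(bboxes, areas):
    assert box[2] * box[3] == area
```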
### Data Splits
The data is split into a training and a testing set. The training set contains 1000 images and the test set 29 images.
## Dataset Creation
### Curation Rationale
From the paper:
> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect worldwide.
### Source Data
#### Initial Data Collection and Normalization
The images in the CPPE-5 dataset were collected using the following process:
* Obtain images from Flickr: Following the object categories identified earlier, we first download images from Flickr and save them at the "Original" size. On Flickr, images are served at multiple sizes (Square 75, Small 240, Large 1024, X-Large 4K, etc.); the "Original" size is an exact copy of the image uploaded by the author.
* Extract relevant metadata: Each image on Flickr comes with searchable metadata, from which we extract the following:
* A direct link to the original image on Flickr
* Width and height of the image
* Title given to the image by the author
* Date and time the image was uploaded on
* Flickr username of the author of the image
* Flickr Name of the author of the image
* Flickr profile of the author of the image
    * The license the image is licensed under
* MD5 hash of the original image
* Obtain images from Google Images: Due to the reasons mentioned earlier, we only collect a very small proportion of images from Google Images. For this set of images we extract the following metadata:
* A direct link to the original image
* Width and height of the image
* MD5 hash of the original image
* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.
* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors.
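The deduplication step above uses GIST descriptors; as a hedged sketch (not the authors' code), the same idea works with any fixed-length descriptor: greedily keep an image only if its descriptor is farther than a threshold from every descriptor already kept.

```python
import math

def dedup_by_descriptor(descriptors, threshold=0.1):
    """Return indices of descriptors to keep: an item is kept only if its
    Euclidean distance to every already-kept descriptor exceeds `threshold`."""
    kept = []
    for i, d in enumerate(descriptors):
        if all(math.dist(d, descriptors[j]) > threshold for j in kept):
            kept.append(i)
    return kept
```

The threshold and distance metric here are illustrative; the actual values depend on the descriptor used.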
#### Who are the source language producers?
The images for this dataset were collected from Flickr and Google Images.
### Annotations
#### Annotation process
The dataset was labelled in two phases: the first phase included labelling 416 images, and the second phase included labelling 613 images. For all the images in the dataset, volunteers were provided with the following table:
|Item |Description |
|------------|--------------------------------------------------------------------- |
|coveralls | Coveralls are hospital gowns worn by medical professionals in order to provide a barrier between patient and professional; they usually cover most of the exposed skin surfaces of the professional.|
|mask | A mask prevents airborne transmission of infections between patients and/or treating personnel by blocking the movement of pathogens (primarily bacteria and viruses) shed in respiratory droplets and aerosols into and from the wearer’s mouth and nose.|
|face shield | A face shield aims to protect the wearer’s entire face (or part of it) from hazards such as flying objects and road debris, chemical splashes (in laboratories or in industry), or potentially infectious materials (in medical and laboratory environments).|
|gloves | Gloves are used during medical examinations and procedures to help prevent cross-contamination between caregivers and patients.|
|goggles | Goggles, or safety glasses, are forms of protective eyewear that usually enclose or protect the area surrounding the eye in order to prevent particulates, water or chemicals from striking the eyes.|
as well as examples of correctly labelled images, incorrectly labelled images, and not-applicable images. Before the labelling task, each volunteer completed an exercise to verify that they could correctly identify the categories as well as judge whether an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. Whenever the number of bounding boxes differed, the labels for one or more of the bounding boxes differed, or the two volunteer annotations were sufficiently different, a third volunteer compiled the results from the two annotations to produce a correctly labelled image. After this step, a volunteer verified the bounding box annotations. Following this labelling method, we ensured that all images were labelled accurately and contained exhaustive annotations. As a result, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.
#### Who are the annotators?
In both phases, crowd-sourcing techniques were used, with multiple volunteers labelling the dataset using the open-source tool LabelImg.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dagli, Rishit, and Ali Mustufa Shaikh.
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{dagli2021cppe5,
title={CPPE-5: Medical Personal Protective Equipment Dataset},
author={Rishit Dagli and Ali Mustufa Shaikh},
year={2021},
eprint={2112.09569},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
fka/awesome-chatgpt-prompts | 2023-03-07T10:04:18.000Z | [
"license:cc0-1.0",
"ChatGPT",
"region:us"
] | fka | null | null | null | 3,546 | 1,909 | ---
license: cc0-1.0
tags:
- ChatGPT
---
<p align="center"><h1>🧠 Awesome ChatGPT Prompts [CSV dataset]</h1></p>
This is a Dataset Repository of **Awesome ChatGPT Prompts**
**[View All Prompts on GitHub](https://github.com/f/awesome-chatgpt-prompts)**
# License
CC-0
|
JulesBelveze/tldr_news | 2022-08-05T12:17:50.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_ids:news-articles-headline-generation",
"task_ids:text-simplification",
"task_ids:language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | JulesBelveze | The `tldr_news` dataset was constructed by collecting a daily tech newsletter (available at
https://tldr.tech/newsletter). Then for every piece of news, the "headline" and its corresponding "content" were
collected. Such a dataset can be used to train a model to generate a headline from an input piece of text. | null | null | 8 | 1,903 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
multilinguality:
- monolingual
pretty_name: tldr_news
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids:
- news-articles-headline-generation
- text-simplification
- language-modeling
---
# Dataset Card for `tldr_news`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://tldr.tech/newsletter
### Dataset Summary
The `tldr_news` dataset was constructed by collecting a daily tech newsletter (available
[here](https://tldr.tech/newsletter)). Then, for every piece of news, the `headline` and its corresponding `content` were extracted.
Also, the newsletters contain different sections; we add this section information to every piece of news.
Such a dataset can be used to train a model to generate a headline from an input piece of text.
### Supported Tasks and Leaderboards
There are no officially supported tasks or leaderboards for this dataset. However, it could be used for the following
tasks:
- summarization
- headline generation
### Languages
en
## Dataset Structure
### Data Instances
A data point comprises a "headline" and its corresponding "content".
An example is as follows:
```
{
"headline": "Cana Unveils Molecular Beverage Printer, a ‘Netflix for Drinks’ That Can Make Nearly Any Type of Beverage ",
"content": "Cana has unveiled a drink machine that can synthesize almost any drink. The machine uses a cartridge that contains flavor compounds that can be combined to create the flavor of nearly any type of drink. It is about the size of a toaster and could potentially save people from throwing hundreds of containers away every month by allowing people to create whatever drinks they want at home. Around $30 million was spent building Cana’s proprietary hardware platform and chemistry system. Cana plans to start full production of the device and will release pricing by the end of February.",
"category": "Science and Futuristic Technology"
}
```
### Data Fields
- `headline (str)`: the piece of news' headline
- `content (str)`: the piece of news
- `category (str)`: newsletter section
### Data Splits
- `all`: all existing daily newsletters available [here](https://tldr.tech/newsletter).
## Dataset Creation
### Curation Rationale
This dataset was obtained by scraping all the existing newsletters
available [here](https://tldr.tech/newsletter).
Every single newsletter was then processed to extract all the different pieces of news, and for every collected piece
of news the headline and the news content were extracted.
### Source Data
#### Initial Data Collection and Normalization
The dataset has been collected from https://tldr.tech/newsletter.
In order to clean up the samples and construct a dataset better suited for headline generation, we applied a couple of
normalization steps:
1. The headlines initially contain an estimated read time in parentheses; we stripped this information from the
headline.
2. Some news items are sponsored and thus do not belong to any newsletter section. We created an additional category,
"Sponsor", for such samples.
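Assuming the read times appear as a parenthesized suffix such as "(5 minute read)" (the exact pattern is our assumption, not documented by the newsletter), the first normalization step could be sketched as:

```python
import re

def strip_read_time(headline: str) -> str:
    """Remove a trailing '(N minute read)' estimate, if present (assumed format)."""
    return re.sub(r"\s*\(\d+\s+minute\s+read\)\s*$", "", headline).strip()
```

Headlines without such a suffix are returned unchanged.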
#### Who are the source language producers?
The people (or person) behind the https://tldr.tech/ newsletter.
### Annotations
#### Annotation process
Disclaimer: the dataset was generated from a daily newsletter whose author had no intention for it to be used as such.
#### Who are the annotators?
The newsletters were written by the people behind *TLDR tech*.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
This dataset only contains tech news. A model trained on such a dataset might not be able to generalize to other domains.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was obtained by collecting newsletters from this website: https://tldr.tech/newsletter
### Contributions
Thanks to [@JulesBelveze](https://github.com/JulesBelveze) for adding this dataset. |