| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
joelniklaus/Multi_Legal_Pile | 2023-05-15T20:48:26.000Z | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-nc-sa-4.0",
"region:us"
] | joelniklaus | Multi Legal Pile is a dataset of legal documents in the 24 EU languages. | null | 28 | 342 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MultiLegalPile: A Large-Scale Multilingual Corpus for the Legal Domain"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MultiLegalPile: A Large-Scale Multilingual Corpus for the Legal Domain
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models.
It spans 24 EU languages and five legal text types.
### Supported Tasks and Leaderboards
The dataset supports the fill-mask task.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt,
ro, sk, sl, sv
## Dataset Structure
It is structured in the following format:
type -> language -> jurisdiction.jsonl.xz
type is one of the following:
- caselaw
- contracts
- legislation
- other
- mc4_legal
`mc4_legal` is a subset of the other type but is listed separately so it can be easily excluded since it is less
permissively licensed than the other types.
Use the dataset like this:
```python
from datasets import load_dataset
config = 'en_contracts' # {language}_{type}
dataset = load_dataset('joelito/Multi_Legal_Pile', config, split='train', streaming=True)
```
'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'.
To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation').
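For example, a minimal sketch of loading every legislation subset via the 'all' shorthand and inspecting the first streamed record (the field names are not listed on this card, so only the keys are printed):
```python
from datasets import load_dataset

# 'all_legislation' combines the legislation text_type across all languages.
dataset = load_dataset('joelito/Multi_Legal_Pile', 'all_legislation',
                       split='train', streaming=True)

# Streaming mode yields one dict per document; inspect the first record
# to see which fields are present.
first_example = next(iter(dataset))
print(first_example.keys())
```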
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The complete dataset (689GB) consists of four large subsets:
- Native Multi Legal Pile (112GB)
- Eurlex Resources (179GB)
- Legal MC4 (106GB)
- Pile of Law (292GB)
#### Native Multilingual Legal Pile data
| | Language | Text Type | Jurisdiction | Source | Size (MB) | Words | Documents | Words/Document | URL | License |
|---:|:-----------|:------------|:---------------|:-----------------------------------|------------:|------------:|------------:|-----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|
| 0 | bg | legislation | Bulgaria | MARCELL | 8015 | 308946116 | 82777 | 3732 | https://elrc-share.eu/repository/browse/marcell-bulgarian-legislative-subcorpus-v2/946267fe8d8711eb9c1a00155d026706d2c9267e5cdf4d75b5f02168f01906c6/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 1 | cs | caselaw | Czechia | CzCDC Constitutional Court | 11151 | 574336489 | 296652 | 1936 | https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-3052 | [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
| 2 | cs | caselaw | Czechia | CzCDC Supreme Administrative Court | 11151 | 574336489 | 296652 | 1936 | https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-3052 | [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
| 3 | cs | caselaw | Czechia | CzCDC Supreme Court | 11151 | 574336489 | 296652 | 1936 | https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-3052 | [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
| 4 | da | caselaw | Denmark | DDSC | 3469 | 210730560 | 89702 | 2349 | https://huggingface.co/DDSC | [CC BY 4.0 and other, depending on the dataset](https://creativecommons.org/licenses/by-nc/4.0/) |
| 5 | da | legislation | Denmark | DDSC | 10736 | 653153146 | 265868 | 2456 | https://huggingface.co/DDSC | [CC BY 4.0 and other, depending on the dataset](https://creativecommons.org/licenses/by-nc/4.0/) |
| 6 | de | caselaw | Germany | openlegaldata | 31527 | 1785439383 | 596800 | 2991 | https://de.openlegaldata.io/ | [ODbL-1.0](https://opendatacommons.org/licenses/odbl/1-0/) |
| 7 | de | caselaw | Switzerland | entscheidsuche | 31527 | 1785439383 | 596800 | 2991 | https://entscheidsuche.ch/ | [See description](https://entscheidsuche.ch/dataUsage) |
| 8 | de | legislation | Germany | openlegaldata | 8934 | 512840663 | 276034 | 1857 | https://de.openlegaldata.io/ | [ODbL-1.0](https://opendatacommons.org/licenses/odbl/1-0/) |
| 9 | de | legislation | Switzerland | lexfind | 8934 | 512840663 | 276034 | 1857 | https://www.lexfind.ch/fe/de/search | No information provided |
| 10 | fr | caselaw | Switzerland | entscheidsuche | 18313 | 1170335690 | 435569 | 2686 | https://entscheidsuche.ch/ | [See description](https://entscheidsuche.ch/dataUsage) |
| 11 | fr | caselaw | Belgium | jurportal | 18313 | 1170335690 | 435569 | 2686 | https://juportal.be/home/welkom | [See description](https://juportal.be/home/disclaimer) |
| 12 | fr | caselaw | France | CASS | 18313 | 1170335690 | 435569 | 2686 | https://echanges.dila.gouv.fr/OPENDATA/CASS/ | [Open Licence 2.0](https://echanges.dila.gouv.fr/OPENDATA/CASS/DILA_CASS_Presentation_20170824.pdf) |
| 13 | fr | caselaw | Luxembourg | judoc | 18313 | 1170335690 | 435569 | 2686 | https://justice.public.lu/fr.html | [See description](https://justice.public.lu/fr/support/aspects-legaux/conditions-generales.html) |
| 14 | it | caselaw | Switzerland | entscheidsuche | 6483 | 406520336 | 156630 | 2595 | https://entscheidsuche.ch/ | [See description](https://entscheidsuche.ch/dataUsage) |
| 15 | en | legislation | Switzerland | lexfind | 36587 | 2537696894 | 657805 | 3857 | https://www.lexfind.ch/fe/de/search | No information provided |
| 16 | en | legislation | UK | uk-lex | 36587 | 2537696894 | 657805 | 3857 | https://zenodo.org/record/6355465 | [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode) |
| 17 | fr | legislation | Switzerland | lexfind | 9297 | 600170792 | 243313 | 2466 | https://www.lexfind.ch/fe/fr/search | No information provided |
| 18 | fr | legislation | Belgium | ejustice | 9297 | 600170792 | 243313 | 2466 | https://www.ejustice.just.fgov.be/cgi/welcome.pl | No information provided |
| 19 | it | legislation | Switzerland | lexfind | 8332 | 542579039 | 227968 | 2380 | https://www.lexfind.ch/fe/it/search | No information provided |
| 20 | nl | legislation | Belgium | ejustice | 8484 | 550788527 | 232204 | 2372 | https://www.ejustice.just.fgov.be/cgi/welcome.pl | No information provided |
| 21 | hu | legislation | Hungary | MARCELL | 5744 | 264572303 | 86862 | 3045 | https://elrc-share.eu/repository/browse/marcell-hungarian-legislative-subcorpus-v2/a87295ec8d6511eb9c1a00155d0267065f7e56dc7db34ce5aaae0b48a329daaa/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 22 | pl | legislation | Poland | MARCELL | 5459 | 299334705 | 89264 | 3353 | https://elrc-share.eu/repository/browse/marcell-polish-legislative-subcorpus-v2/dd14fa1c8d6811eb9c1a00155d026706c4718ddc9c6e4a92a88923816ca8b219/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 23 | pt | caselaw | Brazil | RulingBR | 196919 | 12611760973 | 17251236 | 731 | https://github.com/diego-feijo/rulingbr | No information provided |
| 24 | pt | caselaw | Brazil | CRETA | 196919 | 12611760973 | 17251236 | 731 | https://www.kaggle.com/datasets/eliasjacob/brcad5?resource=download&select=language_modeling_texts.parquet | [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| 25 | pt | caselaw | Brazil | CJPG | 196919 | 12611760973 | 17251236 | 731 | https://esaj.tjsp.jus.br/cjsg/consultaCompleta.do?f=1 | No information provided |
| 26 | ro | legislation | Romania | MARCELL | 10464 | 559092153 | 215694 | 2592 | https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 27 | sk | legislation | Slovakia | MARCELL | 5208 | 280182047 | 76760 | 3650 | https://elrc-share.eu/repository/browse/marcell-slovak-legislative-subcorpus-v2/6bdee1d68c8311eb9c1a00155d0267063398d3f1a3af40e1b728468dcbd6efdd/ | [CC0-1.0](https://elrc-share.eu/static/metashare/licences/CC0-1.0.pdf) |
| 28 | sl | legislation | Slovenia | MARCELL | 6057 | 365513763 | 88651 | 4123 | https://elrc-share.eu/repository/browse/marcell-slovenian-legislative-subcorpus-v2/e2a779868d4611eb9c1a00155d026706983c845a30d741b78e051faf91828b0d/ | [CC-BY-4.0](https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf) |
| total | all | all | all | all | 1297609 | 81214262514 | 57305071 | 1417 | | |
#### Eurlex Resources
See [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources#data-instances) for more information.
#### Legal-MC4
See [Legal-MC4](https://huggingface.co/datasets/joelito/legal-mc4#data-instances) for more information.
#### Pile-of-Law
See [Pile-of-Law](https://huggingface.co/datasets/pile-of-law/pile-of-law#data-instances) for more information.
| Language | Type | Jurisdiction | Source | Size (MB) | Tokens | Documents | Tokens/Document | Part of Multi_Legal_Pile |
|:-----------|:------------|:---------------|:-------------------------------------|------------:|------------:|------------:|------------------:|:---------------------------|
| en | all | all | all | 503712 | 50547777921 | 9872444 | 5120 | yes |
| en | caselaw | EU | echr | 298 | 28374996 | 8480 | 3346 | yes |
| en | caselaw | Canada | canadian_decisions | 486 | 45438083 | 11343 | 4005 | yes |
| en | caselaw | US | dol_ecab | 942 | 99113541 | 28211 | 3513 | no |
| en | caselaw | US | scotus_oral_arguments | 1092 | 108228951 | 7996 | 13535 | no |
| en | caselaw | US | tax_rulings | 1704 | 166915887 | 54064 | 3087 | no |
| en | caselaw | US | nlrb_decisions | 2652 | 294471818 | 32080 | 9179 | no |
| en | caselaw | US | scotus_filings | 4018 | 593870413 | 63775 | 9311 | yes |
| en | caselaw | US | bva_opinions | 35238 | 4084140080 | 839523 | 4864 | no |
| en | caselaw | US | courtlistener_docket_entry_documents | 139006 | 12713614864 | 1983436 | 6409 | yes |
| en | caselaw | US | courtlistener_opinions | 158110 | 15899704961 | 4518445 | 3518 | yes |
| en | contracts | -- | tos | 4 | 391890 | 50 | 7837 | no |
| en | contracts | US | cfpb_creditcard_contracts | 188 | 25984824 | 2638 | 9850 | yes |
| en | contracts | US | edgar | 28698 | 2936402810 | 987926 | 2972 | yes |
| en | contracts | US | atticus_contracts | 78300 | 7997013703 | 650833 | 12287 | yes |
| en | legislation | US | fre | 2 | 173325 | 68 | 2548 | no |
| en | legislation | US | frcp | 4 | 427614 | 92 | 4647 | no |
| en | legislation | US | eoir | 62 | 6109737 | 2229 | 2741 | no |
| en | legislation | -- | constitutions | 66 | 5984865 | 187 | 32004 | yes |
| en | legislation | US | federal_register | 424 | 39854787 | 5414 | 7361 | yes |
| en | legislation | US | uscode | 716 | 78466325 | 58 | 1352867 | yes |
| en | legislation | EU | euro_parl | 808 | 71344326 | 9672 | 7376 | no |
| en | legislation | US | cfr | 1788 | 160849007 | 243 | 661930 | yes |
| en | legislation | US | us_bills | 3394 | 320723838 | 112483 | 2851 | yes |
| en | legislation | EU | eurlex | 3504 | 401324829 | 142036 | 2825 | no |
| en | legislation | US | state_codes | 18066 | 1858333235 | 217 | 8563747 | yes |
| en | other | -- | bar_exam_outlines | 4 | 346924 | 59 | 5880 | no |
| en | other | US | ftc_advisory_opinions | 4 | 509025 | 145 | 3510 | no |
| en | other | US | olc_memos | 98 | 12764635 | 1384 | 9223 | yes |
| en | other | -- | cc_casebooks | 258 | 24857378 | 73 | 340512 | no |
| en | other | -- | un_debates | 360 | 31152497 | 8481 | 3673 | no |
| en | other | -- | r_legaladvice | 798 | 72605386 | 146671 | 495 | no |
| en | other | US | founding_docs | 1118 | 100390231 | 183664 | 546 | no |
| en | other | US | oig | 5056 | 566782244 | 38954 | 14550 | yes |
| en | other | US | congressional_hearings | 16448 | 1801110892 | 31514 | 57152 | no |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| |
liuhaotian/LLaVA-Instruct-150K | 2023-10-06T22:18:34.000Z | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | liuhaotian | null | null | null | 138 | 342 | ---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: LLaVA Visual Instruct 150K
size_categories:
- 100K<n<1M
---
# LLaVA Visual Instruct 150K Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data.
It is constructed for visual instruction tuning and for building large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct 150K was collected in April 2023 by prompting the GPT-4-0314 API.
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Attribution-NonCommercial 4.0 International
Use of the dataset should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
bbz662bbz/databricks-dolly-15k-ja-gozarinnemon | 2023-05-31T14:44:34.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | bbz662bbz | null | null | null | 2 | 340 | ---
license: cc-by-sa-3.0
---
This dataset was created using "kunishou/databricks-dolly-15k-ja".
This dataset is licensed under CC BY SA 3.0
Last Update : 2023-05-28
databricks-dolly-15k-ja-gozarinnemon
kunishou/databricks-dolly-15k-ja
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
|
GEM/dart | 2022-10-24T15:30:16.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:mit",
"data-to-text",
"arxiv:1910.13461",
"arxiv:1908.09022",
"arxiv:2007.02871",
"arxiv:1709.00103",
"arxiv:1706.09254",
"arxiv:1810.01170",
"region:us"
] | GEM | DART is a large and open-domain structured DAta Record to Text generation corpus
with high-quality sentence annotations with each input being a set of
entity-relation triples following a tree-structured ontology. It consists of
82191 examples across different domains with each input being a semantic RDF
triple set derived from data records in tables and the tree ontology of table
schema, annotated with sentence description that covers all facts in the triple set. | @inproceedings{nan-etal-2021-dart,
title = "{DART}: Open-Domain Structured Data Record to Text Generation",
author = "Nan, Linyong and
Radev, Dragomir and
Zhang, Rui and
Rau, Amrit and
Sivaprasad, Abhinand and
Hsieh, Chiachun and
Tang, Xiangru and
Vyas, Aadit and
Verma, Neha and
Krishna, Pranav and
Liu, Yangxiaokang and
Irwanto, Nadia and
Pan, Jessica and
Rahman, Faiaz and
Zaidi, Ahmad and
Mutuma, Mutethia and
Tarabar, Yasin and
Gupta, Ankit and
Yu, Tao and
Tan, Yi Chern and
Lin, Xi Victoria and
Xiong, Caiming and
Socher, Richard and
Rajani, Nazneen Fatema",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.37",
doi = "10.18653/v1/2021.naacl-main.37",
pages = "432--447",
abstract = "We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.",
} | null | 0 | 338 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- mit
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: dart
tags:
- data-to-text
---
# Dataset Card for GEM/dart
## Dataset Description
- **Homepage:** n/a
- **Repository:** https://github.com/Yale-LILY/dart
- **Paper:** https://aclanthology.org/2021.naacl-main.37/
- **Leaderboard:** https://github.com/Yale-LILY/dart#leaderboard
- **Point of Contact:** Dragomir Radev, Rui Zhang, Nazneen Rajani
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/dart).
### Dataset Summary
DART is an English dataset aggregating multiple other data-to-text datasets in a common triple-based format. The new format is completely flat, thus not requiring a model to learn hierarchical structures, while still retaining the full information.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/dart')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/dart).
#### website
n/a
#### paper
[ACL Anthology](https://aclanthology.org/2021.naacl-main.37/)
#### authors
Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/Yale-LILY/dart)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2021.naacl-main.37/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{nan-etal-2021-dart,
title = "{DART}: Open-Domain Structured Data Record to Text Generation",
author = "Nan, Linyong and
Radev, Dragomir and
Zhang, Rui and
Rau, Amrit and
Sivaprasad, Abhinand and
Hsieh, Chiachun and
Tang, Xiangru and
Vyas, Aadit and
Verma, Neha and
Krishna, Pranav and
Liu, Yangxiaokang and
Irwanto, Nadia and
Pan, Jessica and
Rahman, Faiaz and
Zaidi, Ahmad and
Mutuma, Mutethia and
Tarabar, Yasin and
Gupta, Ankit and
Yu, Tao and
Tan, Yi Chern and
Lin, Xi Victoria and
Xiong, Caiming and
Socher, Richard and
Rajani, Nazneen Fatema",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.37",
doi = "10.18653/v1/2021.naacl-main.37",
pages = "432--447",
abstract = "We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures. To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title. Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimum post-editing. We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization. Our data and code can be found at https://github.com/Yale-LILY/dart.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Dragomir Radev, Rui Zhang, Nazneen Rajani
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
{dragomir.radev, r.zhang}@yale.edu, {nazneen.rajani}@salesforce.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Leaderboard](https://github.com/Yale-LILY/dart#leaderboard)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
Several state-of-the-art table-to-text models were evaluated on DART, such as BART ([Lewis et al., 2020](https://arxiv.org/pdf/1910.13461.pdf)), Seq2Seq-Att ([MELBOURNE](https://webnlg-challenge.loria.fr/files/melbourne_report.pdf)) and End-to-End Transformer ([Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf)).
The leaderboard reports BLEU, METEOR, TER, MoverScore, BERTScore and BLEURT scores.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
It is aggregated from multiple other datasets that use general US-American or British English without differentiation between dialects.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The dataset is aggregated from multiple others that were crowdsourced on different platforms.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
mit: MIT License
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset is aimed to further research in natural language generation from semantic data.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The speaker is required to produce coherent sentences and construct a tree-structured ontology of the column headers.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`, `industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Yale University, Salesforce Research, Penn State University, The University of Hong Kong, MIT
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Miruna Clinciu contributed the original data card and Yacine Jernite wrote the initial data loader. Sebastian Gehrmann migrated the data card and the loader to the new format.
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `tripleset`: a list of tuples, each tuple has 3 items
- `subtree_was_extended`: a boolean variable (true or false)
- `annotations`: a list of dicts, each with `source` and `text` keys:
  - `source`: a string mentioning the name of the source table.
  - `text`: a sentence string.
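A minimal sketch of how these fields fit together, using values taken from the example instance shown further below (the exact nesting returned by the `datasets` loader may differ from this raw JSON layout):
```
# One DART instance as documented above (values from the example instance below).
example = {
    "tripleset": [
        ["Ben Mauk", "High school", "Kenton"],
        ["Ben Mauk", "College", "Wake Forest Cincinnati"],
    ],
    "subtree_was_extended": False,
    "annotations": [
        {
            "source": "WikiTableQuestions_lily",
            "text": "Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college.",
        }
    ],
}

# Each triple is a (subject, predicate, object) item.
for subj, pred, obj in example["tripleset"]:
    print(f"{subj} --{pred}--> {obj}")

# Each annotation pairs a source table name with a reference sentence.
for annotation in example["annotations"]:
    print(annotation["source"], "->", annotation["text"])
```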
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure is meant to support more complex structures beyond "flat" attribute-value pairs by encoding hierarchical relationships.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
They are a combination of those from existing datasets and new annotations that take advantage of the hierarchical structure.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"tripleset": [
[
"Ben Mauk",
"High school",
"Kenton"
],
[
"Ben Mauk",
"College",
"Wake Forest Cincinnati"
]
],
"subtree_was_extended": false,
"annotations": [
{
"source": "WikiTableQuestions_lily",
"text": "Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college."
}
]
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
| Input Unit | Examples | Vocab Size | Words per SR | Sents per SR | Tables |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| Triple Set | 82,191 | 33.2K | 21.6 | 1.5 | 5,623 |

| Train | Dev | Test |
| ------------- | ------------- | ------------- |
| 62,659 | 6,980 | 12,552 |
Statistics of DART decomposed by different collection methods. DART exhibits a great deal of topical variety in terms of the number of unique predicates, the number of unique triples, and the vocabulary size. These statistics are computed from DART v1.1.1; the number of unique predicates reported is post-unification (see Section 3.4). SR: Surface Realization.
([details in Table 1 and 2](https://arxiv.org/pdf/2007.02871.pdf)).
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
For WebNLG 2017 and Cleaned E2E, DART uses the original data splits. For the new annotations on WikiTableQuestions and WikiSQL, random splitting would make the train, dev, and test splits contain similar tables and similar <triple-set, sentence> examples. They are therefore split based on Jaccard similarity such that no training example has a similarity of over 0.5 with any test example.
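To illustrate the criterion, here is a small sketch of a Jaccard similarity check between two triple sets; this is not the authors' released splitting code, and the tuple-based triple representation is an assumption:
```
def jaccard_similarity(triples_a, triples_b):
    # Each triple is represented as a (subject, predicate, object) tuple.
    set_a, set_b = set(triples_a), set(triples_b)
    if not set_a and not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

# A candidate training example is only acceptable if its similarity to every
# test example stays at or below the 0.5 threshold described above.
train_example = [("Ben Mauk", "High school", "Kenton")]
test_example = [("Ben Mauk", "College", "Wake Forest Cincinnati")]
assert jaccard_similarity(train_example, test_example) <= 0.5
```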
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
DART is a large and open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations with each input being a set of entity-relation triples following a tree-structured ontology.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
The tree structure is unique among GEM datasets
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Reasoning, surface realization
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
Experimental results on DART show that the BART model has the highest performance among the three models, with a BLEU score of 37.06. This is attributed to BART’s generalization ability due to pretraining ([Table 4](https://arxiv.org/pdf/2007.02871.pdf)).
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Reasoning, surface realization
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `MoverScore`, `BERT-Score`, `BLEURT`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The leaderboard uses the combination of BLEU, METEOR, TER, MoverScore, BERTScore, PARENT and BLEURT to overcome the limitations of the n-gram overlap metrics.
A small-scale human annotation of 100 data points was conducted along the dimensions of (1) fluency - a sentence is natural and grammatical, and (2) semantic faithfulness - a sentence is supported by the input triples.
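As an illustration only, a minimal sketch of scoring predictions with one of these metrics through the Hugging Face `evaluate` library (an assumption about tooling, not the official leaderboard script):
```
import evaluate

predictions = ["Ben Mauk attended Kenton High School."]
references = [["Ben Mauk, who attended Kenton High School, attended Wake Forest Cincinnati for college."]]

# sacreBLEU expects one list of reference sentences per prediction.
sacrebleu = evaluate.load("sacrebleu")
print(sacrebleu.compute(predictions=predictions, references=references)["score"])
```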
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
n/a
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
BART currently achieves the best performance according to the leaderboard.
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
With DART, the dataset creators encourage further research in natural language generation from semantic data. DART provides high-quality sentence annotations with each input being a set of entity-relation triples in a tree structure.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The speaker is required to produce coherent sentences and construct a tree-structured ontology of the column headers.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
- human annotation on open-domain Wikipedia tables from WikiTableQuestions ([Pasupat and Liang,
2015](https://www.aclweb.org/anthology/P15-1142.pdf)) and WikiSQL ([Zhong et al., 2017](https://arxiv.org/pdf/1709.00103.pdf))
- automatic conversion of questions in WikiSQL to declarative sentences
- incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017[a](https://www.aclweb.org/anthology/P17-1017.pdf),[b](https://www.aclweb.org/anthology/W17-3518.pdf); [Shimorina and Gardent, 2018](https://www.aclweb.org/anthology/W18-6543.pdf)) and Cleaned E2E ([Novikova et al., 2017b](https://arxiv.org/pdf/1706.09254.pdf); Dušek et al., [2018](https://arxiv.org/pdf/1810.01170.pdf), [2019](https://www.aclweb.org/anthology/W19-8652.pdf))
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`, `Created for the dataset`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Offline media collection`
#### Creation Process
<!-- info: If created for the dataset, describe the creation process. -->
<!-- scope: microscope -->
Creators proposed a two-stage annotation process for constructing triple set sentence pairs based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row. To form a triple set sentence pair, the highlighted cells can be converted to a connected triple set automatically according to the column ontology for the given table.
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
No further information about the MTurk workers has been provided.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The sub-datasets are from Wikipedia, DBPedia, and artificially created restaurant data.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The new annotations are based on Wikipedia, which is in the public domain, and the other two datasets permit reuse (with attribution).
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
None of the datasets talk about individuals
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
No, the annotators are raters on crowdworking platforms and thus only represent their demographics.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The dataset may contain some social biases, as the input sentences are based on Wikipedia (WikiTableQuestions, WikiSQL, WebNLG). Studies have shown that the English Wikipedia contains gender biases ([Dinan et al., 2020](https://www.aclweb.org/anthology/2020.emnlp-main.23.pdf)), racial biases ([Papakyriakopoulos et al., 2020](https://dl.acm.org/doi/pdf/10.1145/3351095.3372843)) and geographical bias ([Livingstone et al., 2010](https://doi.org/10.5204/mcj.315)). [More info](https://en.wikipedia.org/wiki/Racial_bias_on_Wikipedia#cite_note-23).
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The end-to-end transformer has the lowest performance, since the transformer model needs intermediate pipeline planning steps to achieve higher performance. Similar findings can be found in [Castro Ferreira et al., 2019](https://arxiv.org/pdf/1908.09022.pdf).
|
AlekseyKorshuk/drama-books | 2022-06-11T13:26:37.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 1 | 338 | Entry not found |
tapaco | 2023-06-08T13:14:46.000Z | [
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:extended|other-tatoeba",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:ber",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:cbk",
"language:cmn",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fi",
"language:fr",
"language:gl",
"language:gos",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jbo",
"language:kab",
"language:ko",
"language:kw",
"language:la",
"language:lfn",
"language:lt",
"language:mk",
"language:mr",
"language:nb",
"language:nds",
"language:nl",
"language:orv",
"language:ota",
"language:pes",
"language:pl",
"language:pt",
"language:rn",
"language:ro",
"language:ru",
"language:sl",
"language:sr",
"language:sv",
"language:tk",
"language:tl",
"language:tlh",
"language:tok",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:vi",
"language:vo",
"language:war",
"language:wuu",
"language:yue",
"license:cc-by-2.0",
"paraphrase-generation",
"region:us"
] | null | A freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences and translations for particular linguistic constructions and words. The paraphrase corpus is created by populating a graph with Tatoeba sentences and equivalence links between sentences “meaning the same thing”. This graph is then traversed to extract sets of paraphrases. Several language-independent filters and pruning steps are applied to remove uninteresting sentences. A manual evaluation performed on three languages shows that between half and three quarters of inferred paraphrases are correct and that most remaining ones are either correct but trivial, or near-paraphrases that neutralize a morphological distinction. The corpus contains a total of 1.9 million sentences, with 200 – 250 000 sentences per language. It covers a range of languages for which, to our knowledge, no other paraphrase dataset exists. | @dataset{scherrer_yves_2020_3707949,
author = {Scherrer, Yves},
title = {{TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages}},
month = mar,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3707949},
url = {https://doi.org/10.5281/zenodo.3707949}
} | null | 30 | 337 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- af
- ar
- az
- be
- ber
- bg
- bn
- br
- ca
- cbk
- cmn
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fi
- fr
- gl
- gos
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jbo
- kab
- ko
- kw
- la
- lfn
- lt
- mk
- mr
- nb
- nds
- nl
- orv
- ota
- pes
- pl
- pt
- rn
- ro
- ru
- sl
- sr
- sv
- tk
- tl
- tlh
- tok
- tr
- tt
- ug
- uk
- ur
- vi
- vo
- war
- wuu
- yue
license:
- cc-by-2.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- extended|other-tatoeba
task_categories:
- text2text-generation
- translation
- text-classification
task_ids:
- semantic-similarity-classification
paperswithcode_id: tapaco
pretty_name: TaPaCo Corpus
tags:
- paraphrase-generation
dataset_info:
- config_name: all_languages
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 162802556
num_examples: 1926192
download_size: 32213126
dataset_size: 162802556
- config_name: af
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 21219
num_examples: 307
download_size: 32213126
dataset_size: 21219
- config_name: ar
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 546200
num_examples: 6446
download_size: 32213126
dataset_size: 546200
- config_name: az
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 44461
num_examples: 624
download_size: 32213126
dataset_size: 44461
- config_name: be
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 140376
num_examples: 1512
download_size: 32213126
dataset_size: 140376
- config_name: ber
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 5118620
num_examples: 67484
download_size: 32213126
dataset_size: 5118620
- config_name: bg
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 590535
num_examples: 6324
download_size: 32213126
dataset_size: 590535
- config_name: bn
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 146654
num_examples: 1440
download_size: 32213126
dataset_size: 146654
- config_name: br
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 177919
num_examples: 2536
download_size: 32213126
dataset_size: 177919
- config_name: ca
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 39404
num_examples: 518
download_size: 32213126
dataset_size: 39404
- config_name: cbk
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 19404
num_examples: 262
download_size: 32213126
dataset_size: 19404
- config_name: cmn
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 964514
num_examples: 12549
download_size: 32213126
dataset_size: 964514
- config_name: cs
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 482292
num_examples: 6659
download_size: 32213126
dataset_size: 482292
- config_name: da
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 848886
num_examples: 11220
download_size: 32213126
dataset_size: 848886
- config_name: de
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 10593377
num_examples: 125091
download_size: 32213126
dataset_size: 10593377
- config_name: el
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 926054
num_examples: 10072
download_size: 32213126
dataset_size: 926054
- config_name: en
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 15070349
num_examples: 158053
download_size: 32213126
dataset_size: 15070349
- config_name: eo
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 16810965
num_examples: 207105
download_size: 32213126
dataset_size: 16810965
- config_name: es
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 6851135
num_examples: 85064
download_size: 32213126
dataset_size: 6851135
- config_name: et
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 17127
num_examples: 241
download_size: 32213126
dataset_size: 17127
- config_name: eu
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 42702
num_examples: 573
download_size: 32213126
dataset_size: 42702
- config_name: fi
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 2520167
num_examples: 31753
download_size: 32213126
dataset_size: 2520167
- config_name: fr
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 9481426
num_examples: 116733
download_size: 32213126
dataset_size: 9481426
- config_name: gl
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 26551
num_examples: 351
download_size: 32213126
dataset_size: 26551
- config_name: gos
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 18442
num_examples: 279
download_size: 32213126
dataset_size: 18442
- config_name: he
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 6024345
num_examples: 68350
download_size: 32213126
dataset_size: 6024345
- config_name: hi
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 209382
num_examples: 1913
download_size: 32213126
dataset_size: 209382
- config_name: hr
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 36638
num_examples: 505
download_size: 32213126
dataset_size: 36638
- config_name: hu
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 5289610
num_examples: 67964
download_size: 32213126
dataset_size: 5289610
- config_name: hy
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 49230
num_examples: 603
download_size: 32213126
dataset_size: 49230
- config_name: ia
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 194035
num_examples: 2548
download_size: 32213126
dataset_size: 194035
- config_name: id
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 124568
num_examples: 1602
download_size: 32213126
dataset_size: 124568
- config_name: ie
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 31956
num_examples: 488
download_size: 32213126
dataset_size: 31956
- config_name: io
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 33892
num_examples: 480
download_size: 32213126
dataset_size: 33892
- config_name: is
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 132062
num_examples: 1641
download_size: 32213126
dataset_size: 132062
- config_name: it
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 15073750
num_examples: 198919
download_size: 32213126
dataset_size: 15073750
- config_name: ja
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 4314423
num_examples: 44267
download_size: 32213126
dataset_size: 4314423
- config_name: jbo
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 201564
num_examples: 2704
download_size: 32213126
dataset_size: 201564
- config_name: kab
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1211051
num_examples: 15944
download_size: 32213126
dataset_size: 1211051
- config_name: ko
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 40458
num_examples: 503
download_size: 32213126
dataset_size: 40458
- config_name: kw
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 88577
num_examples: 1328
download_size: 32213126
dataset_size: 88577
- config_name: la
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 485749
num_examples: 6889
download_size: 32213126
dataset_size: 485749
- config_name: lfn
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 203383
num_examples: 2313
download_size: 32213126
dataset_size: 203383
- config_name: lt
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 599166
num_examples: 8042
download_size: 32213126
dataset_size: 599166
- config_name: mk
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1240185
num_examples: 14678
download_size: 32213126
dataset_size: 1240185
- config_name: mr
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1838921
num_examples: 16413
download_size: 32213126
dataset_size: 1838921
- config_name: nb
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 85371
num_examples: 1094
download_size: 32213126
dataset_size: 85371
- config_name: nds
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 195021
num_examples: 2633
download_size: 32213126
dataset_size: 195021
- config_name: nl
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1790975
num_examples: 23561
download_size: 32213126
dataset_size: 1790975
- config_name: orv
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 40484
num_examples: 471
download_size: 32213126
dataset_size: 40484
- config_name: ota
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 44996
num_examples: 486
download_size: 32213126
dataset_size: 44996
- config_name: pes
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 433406
num_examples: 4285
download_size: 32213126
dataset_size: 433406
- config_name: pl
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 1722188
num_examples: 22391
download_size: 32213126
dataset_size: 1722188
- config_name: pt
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 6141178
num_examples: 78430
download_size: 32213126
dataset_size: 6141178
- config_name: rn
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 47387
num_examples: 648
download_size: 32213126
dataset_size: 47387
- config_name: ro
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 162955
num_examples: 2092
download_size: 32213126
dataset_size: 162955
- config_name: ru
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 24540667
num_examples: 251263
download_size: 32213126
dataset_size: 24540667
- config_name: sl
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 49610
num_examples: 706
download_size: 32213126
dataset_size: 49610
- config_name: sr
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 667308
num_examples: 8175
download_size: 32213126
dataset_size: 667308
- config_name: sv
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 509884
num_examples: 7005
download_size: 32213126
dataset_size: 509884
- config_name: tk
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 95047
num_examples: 1165
download_size: 32213126
dataset_size: 95047
- config_name: tl
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 76059
num_examples: 1017
download_size: 32213126
dataset_size: 76059
- config_name: tlh
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 185309
num_examples: 2804
download_size: 32213126
dataset_size: 185309
- config_name: toki
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 310864
num_examples: 3738
download_size: 32213126
dataset_size: 310864
- config_name: tr
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 11271158
num_examples: 142088
download_size: 32213126
dataset_size: 11271158
- config_name: tt
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 277269
num_examples: 2398
download_size: 32213126
dataset_size: 277269
- config_name: ug
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 118474
num_examples: 1183
download_size: 32213126
dataset_size: 118474
- config_name: uk
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 4885677
num_examples: 54431
download_size: 32213126
dataset_size: 4885677
- config_name: ur
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 24075
num_examples: 252
download_size: 32213126
dataset_size: 24075
- config_name: vi
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 84773
num_examples: 962
download_size: 32213126
dataset_size: 84773
- config_name: vo
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 22164
num_examples: 328
download_size: 32213126
dataset_size: 22164
- config_name: war
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 25759
num_examples: 327
download_size: 32213126
dataset_size: 25759
- config_name: wuu
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 31640
num_examples: 408
download_size: 32213126
dataset_size: 31640
- config_name: yue
features:
- name: paraphrase_set_id
dtype: string
- name: sentence_id
dtype: string
- name: paraphrase
dtype: string
- name: lists
sequence: string
- name: tags
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 42766
num_examples: 561
download_size: 32213126
dataset_size: 42766
config_names:
- af
- all_languages
- ar
- az
- be
- ber
- bg
- bn
- br
- ca
- cbk
- cmn
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fi
- fr
- gl
- gos
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jbo
- kab
- ko
- kw
- la
- lfn
- lt
- mk
- mr
- nb
- nds
- nl
- orv
- ota
- pes
- pl
- pt
- rn
- ro
- ru
- sl
- sr
- sv
- tk
- tl
- tlh
- toki
- tr
- tt
- ug
- uk
- ur
- vi
- vo
- war
- wuu
- yue
---
# Dataset Card for TaPaCo Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages](https://zenodo.org/record/3707949#.X9Dh0cYza3I)
- **Paper:** [TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages](https://www.aclweb.org/anthology/2020.lrec-1.848.pdf)
- **Data:** https://doi.org/10.5281/zenodo.3707949
- **Point of Contact:** [Yves Scherrer](https://blogs.helsinki.fi/yvesscherrer/)
### Dataset Summary
A freely available paraphrase corpus for 73 languages extracted from the Tatoeba database.
Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences
and translations for particular linguistic constructions and words. The paraphrase corpus is created by populating a
graph with Tatoeba sentences and equivalence links between sentences “meaning the same thing”. This graph is then
traversed to extract sets of paraphrases. Several language-independent filters and pruning steps are applied to
remove uninteresting sentences. A manual evaluation performed on three languages shows that between half and three
quarters of inferred paraphrases are correct and that most remaining ones are either correct but trivial,
or near-paraphrases that neutralize a morphological distinction. The corpus contains a total of 1.9 million
sentences, with 200–250,000 sentences per language. It covers a range of languages for which, to our knowledge,
no other paraphrase dataset exists.
### Supported Tasks and Leaderboards
Paraphrase detection and generation have become popular tasks in NLP and are increasingly integrated into a wide variety of common downstream tasks such as machine translation, information retrieval, question answering, and semantic parsing. Most of the existing datasets cover only a single language – in most cases English – or a small number of languages. Furthermore, some paraphrase datasets focus on lexical and phrasal rather than sentential paraphrases, while others are created (semi-)automatically using machine translation.

The number of sentences per language ranges from 200 to 250,000, which makes the dataset more suitable for fine-tuning and evaluation purposes than for training. It is well-suited for multi-reference evaluation of paraphrase generation models, as there is generally not a single correct way of paraphrasing a given input sentence.
### Languages
The dataset contains paraphrases in Afrikaans, Arabic, Azerbaijani, Belarusian, Berber languages, Bulgarian, Bengali, Breton, Catalan; Valencian, Chavacano, Mandarin, Czech, Danish, German, Greek, Modern (1453-), English, Esperanto, Spanish; Castilian, Estonian, Basque, Finnish, French, Galician, Gronings, Hebrew, Hindi, Croatian, Hungarian, Armenian, Interlingua (International Auxiliary Language Association), Indonesian, Interlingue; Occidental, Ido, Icelandic, Italian, Japanese, Lojban, Kabyle, Korean, Cornish, Latin, Lingua Franca Nova, Lithuanian, Macedonian, Marathi, Bokmål, Norwegian; Norwegian Bokmål, Low German; Low Saxon; German, Low; Saxon, Low, Dutch; Flemish, Old Russian, Turkish, Ottoman (1500-1928), Iranian Persian, Polish, Portuguese, Rundi, Romanian; Moldavian; Moldovan, Russian, Slovenian, Serbian, Swedish, Turkmen, Tagalog, Klingon; tlhIngan-Hol, Toki Pona, Turkish, Tatar, Uighur; Uyghur, Ukrainian, Urdu, Vietnamese, Volapük, Waray, Wu Chinese and Yue Chinese.
## Dataset Structure
### Data Instances
Each data instance corresponds to a paraphrase, e.g.:
```
{
'paraphrase_set_id': '1483',
'sentence_id': '5778896',
'paraphrase': 'Ɣremt adlis-a.',
'lists': ['7546'],
'tags': [''],
'language': 'ber'
}
```
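For quick inspection, a single language configuration can be loaded with the `datasets` library. The snippet below is a minimal sketch; it assumes the corpus is available on the Hugging Face Hub under the `tapaco` identifier with the per-language configuration names listed in the YAML header (e.g. `ber` for Berber).
```python
# pip install datasets
from collections import defaultdict

from datasets import load_dataset

# Load a single language configuration (here Berber, config name "ber").
berber = load_dataset("tapaco", "ber", split="train")

# Group sentences by paraphrase_set_id to recover the paraphrase sets.
paraphrase_sets = defaultdict(list)
for row in berber:
    paraphrase_sets[row["paraphrase_set_id"]].append(row["paraphrase"])

set_id, sentences = next(iter(paraphrase_sets.items()))
print(set_id, sentences[:3])
```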
### Data Fields
Each data instance has the following fields:
- `paraphrase_set_id`: a running number that groups together all sentences that are considered paraphrases of each
other
- `sentence_id`: OPUS sentence id
- `paraphrase`: Sentential paraphrase in a given language for a given paraphrase_set_id
- `lists`: Contributors can add sentences to lists in order to specify the original source of the data
- `tags`: Indicates morphological or phonological properties of the sentence when available
- `language`: Language identifier, one of the 73 languages that belong to this dataset.
### Data Splits
The dataset has a single `train` split, containing a total of 1.9 million sentences, with 200–250,000 sentences per language.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 2.0 Generic
### Citation Information
```
@dataset{scherrer_yves_2020_3707949,
author = {Scherrer, Yves},
title = {{TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages}},
month = mar,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3707949},
url = {https://doi.org/10.5281/zenodo.3707949}
}
```
### Contributions
Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset. |
explodinggradients/ragas-wikiqa | 2023-07-27T07:13:14.000Z | [
"region:us"
] | explodinggradients | null | null | null | 1 | 337 | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_answer
dtype: string
- name: incorrect_answer
dtype: string
- name: question_id
dtype: string
- name: generated_with_rag
dtype: string
- name: context
sequence: string
- name: generated_without_rag
dtype: string
splits:
- name: train
num_bytes: 1906213
num_examples: 232
download_size: 1152464
dataset_size: 1906213
---
# Dataset Card for "ragas-wikiqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SetFit/ag_news | 2022-01-19T21:21:07.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 336 | Entry not found |
bigbio/pubmed_qa | 2022-12-22T15:46:24.000Z | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | bigbio | PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research biomedical questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
PubMedQA datasets comprise of 3 different subsets:
(1) PubMedQA Labeled (PQA-L): A labeled PubMedQA subset comprises of 1k manually annotated yes/no/maybe QA data collected from PubMed articles.
(2) PubMedQA Artificial (PQA-A): An artificially labelled PubMedQA subset comprises of 211.3k PubMed articles with automatically generated questions from the statement titles and yes/no answer labels generated using a simple heuristic.
(3) PubMedQA Unlabeled (PQA-U): An unlabeled PubMedQA subset comprises of 61.2k context-question pairs data collected from PubMed articles. | @inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
} | null | 3 | 336 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: PubMedQA
homepage: https://github.com/pubmedqa/pubmedqa
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for PubMedQA
## Dataset Description
- **Homepage:** https://github.com/pubmedqa/pubmedqa
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research biomedical questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
PubMedQA datasets comprise 3 different subsets:
(1) PubMedQA Labeled (PQA-L): A labeled PubMedQA subset comprising 1k manually annotated yes/no/maybe QA data collected from PubMed articles.
(2) PubMedQA Artificial (PQA-A): An artificially labelled PubMedQA subset comprising 211.3k PubMed articles with automatically generated questions from the statement titles and yes/no answer labels generated using a simple heuristic.
(3) PubMedQA Unlabeled (PQA-U): An unlabeled PubMedQA subset comprising 61.2k context-question pairs collected from PubMed articles.
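A minimal loading sketch with the `datasets` library, assuming the `bigbio/pubmed_qa` identifier; BigBio repositories typically expose one configuration per subset and schema, so the available names should be inspected before choosing one:
```python
# pip install datasets
from datasets import get_dataset_config_names, load_dataset

# List the available configurations (one per subset/schema combination).
# Newer versions of `datasets` may additionally require trust_remote_code=True.
configs = get_dataset_config_names("bigbio/pubmed_qa")
print(configs)

# Hypothetical choice: simply take the first configuration listed.
ds = load_dataset("bigbio/pubmed_qa", name=configs[0], split="train")
print(ds[0])
```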
## Citation Information
```
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
|
jjonhwa/SECOND_KQ_V2 | 2023-09-13T07:04:47.000Z | [
"region:us"
] | jjonhwa | null | null | null | 0 | 336 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: score
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 686780736
num_examples: 86975
download_size: 276955064
dataset_size: 686780736
---
# Dataset Card for "SECOND_KQ_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allenai/prosocial-dialog | 2023-02-03T07:58:29.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:original",
"source_datasets:extended|social_bias_frames",
"language:en",
"license:cc-by-4.0",
"dialogue",
"dialogue safety",
"social norm",
"rules-of-thumb",
"arxiv:2205.12688",
"region:us"
] | allenai | null | null | null | 64 | 335 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- machine-generated
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: ProsocialDialog
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- original
- extended|social_bias_frames
tags:
- dialogue
- dialogue safety
- social norm
- rules-of-thumb
task_categories:
- conversational
- text-classification
task_ids:
- dialogue-generation
- multi-class-classification
---
# Dataset Card for ProsocialDialog Dataset
## Dataset Description
- **Repository:** [Dataset and Model](https://github.com/skywalker023/prosocial-dialog)
- **Paper:** [ProsocialDialog: A Prosocial Backbone for Conversational Agents](https://aclanthology.org/2022.emnlp-main.267/)
- **Point of Contact:** [Hyunwoo Kim](mailto:hyunwook@allenai.org)
## Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
## Supported Tasks
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
## Languages
English
## Dataset Structure
### Data Attributes
attribute | type | description
--- | --- | ---
`context` | str | the potentially unsafe utterance
`response` | str | the guiding utterance grounded on rules-of-thumb (`rots`)
`rots` | list of str\|null | the relevant rules-of-thumb for `context` *not* labeled as \_\_casual\_\_
`safety_label` | str | the final verdict of the context according to `safety_annotations`: {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
`safety_annotations` | list of str | raw annotations from three workers: {casual, needs caution, needs intervention}
`safety_annotation_reasons` | list of str | the reasons behind the safety annotations in free-form text from each worker
`source` | str | the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics_amt, ethics_reddit}
`etc` | str\|null | other information
`dialogue_id` | int | the dialogue index
`response_id` | int | the response index
`episode_done` | bool | an indicator of whether it is the end of the dialogue
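Below is a minimal sketch of loading the dataset and inspecting these fields with the `datasets` library; it assumes the `allenai/prosocial-dialog` identifier and a `train` split.
```python
# pip install datasets
from collections import Counter

from datasets import load_dataset

dialogues = load_dataset("allenai/prosocial-dialog", split="train")

# One (context, response) pair with its rules-of-thumb.
example = dialogues[0]
print(example["context"])
print(example["response"])
print(example["rots"])

# Distribution of the final safety verdicts across the split.
print(Counter(dialogues["safety_label"]))
```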
## Dataset Creation
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2205.12688).
## Additional Information
### Citation
Please cite our work if you found the resources in this repository useful:
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` |
shailja/Verilog_GitHub | 2023-09-20T17:14:18.000Z | [
"license:mit",
"arxiv:2212.11140",
"region:us"
] | shailja | null | null | null | 2 | 335 | ---
license: mit
---
---
pipeline_tag: text-generation
tags:
- code
model-index:
- name: VeriGen
results:
- task:
type: text-generation
dataset:
type:
name:
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
# VeriGen
## Table of Contents
1. [Dataset Summary](#dataset-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [License](#license)
5. [Citation](#citation)
## Dataset Summary
- The dataset comprises Verilog modules as entries. The entries were retrieved from the GitHub dataset on BigQuery.
- For training [models](https://huggingface.co/shailja/fine-tuned-codegen-2B-Verilog), we filtered out entries exceeding 20,000 characters as well as duplicates (exact duplicates ignoring whitespace).
- **Paper:** [ Benchmarking Large Language Models for Automated Verilog RTL Code Generation](https://arxiv.org/abs/2212.11140)
- **Point of Contact:** [contact@shailja](mailto:shailja.thakur90@gmail.com)
- **Languages:** Verilog (Hardware Description Language)
### Data Splits
The dataset only contains a train split.
### Use
```python
# pip install datasets
from datasets import load_dataset
ds = load_dataset("shailja/Verilog_GitHub", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
```
### Intended Use
The dataset consists of source code from a range of GitHub repositories. As such, it can potentially include non-compilable, low-quality, and vulnerable code.
### Attribution & Other Requirements
The pretraining dataset of the model was not filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected.
# License
The dataset is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@misc{https://doi.org/10.48550/arxiv.2212.11140,
doi = {10.48550/ARXIV.2212.11140},
url = {https://arxiv.org/abs/2212.11140},
author = {Thakur, Shailja and Ahmad, Baleegh and Fan, Zhenxing and Pearce, Hammond and Tan, Benjamin and Karri, Ramesh and Dolan-Gavitt, Brendan and Garg, Siddharth},
title = {Benchmarking Large Language Models for Automated Verilog RTL Code Generation},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
approach0/MATH-full | 2023-09-14T18:42:51.000Z | [
"region:us"
] | approach0 | null | null | null | 0 | 335 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: src_path
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7226726
num_examples: 7500
- name: test
num_bytes: 4555831
num_examples: 5000
download_size: 4968481
dataset_size: 11782557
---
# Dataset Card for "MATH-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tevatron/beir-corpus | 2022-07-07T23:53:45.000Z | [
"region:us"
] | Tevatron | null | null | null | 0 | 334 | Entry not found |
hakurei/open-instruct-v1 | 2023-04-17T03:03:13.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | hakurei | null | null | null | 84 | 334 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
# Open Instruct V1 - A dataset for having LLMs follow instructions.
Open Instruct V1 is an amalgamation of different datasets which are cleaned and then collated into a singular format for training.
## Dataset Breakdown
| Dataset | Amount of Samples |
|----------------|-------------------|
| [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 51759 |
| [Self Instruct](https://github.com/yizhongw/self-instruct) | 82599 |
| [GPT-4 Instruct](https://github.com/teknium1/GPTeacher) | 18194 |
| [Code Alpaca](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K) | 18019 |
| [Dolly](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) | 15015 |
| [Synthetic](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) | 33143 |
| [Roleplay](https://github.com/teknium1/GPTeacher) | 3146 |
| [asss](https://huggingface.co/datasets/HuggingFaceH4/asss) | 448 |
| [instruction-dataset](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) | 327 |
| Total | 222650 |
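A minimal loading sketch, assuming the `hakurei/open-instruct-v1` identifier and a single `train` split; the field names are not documented on this card, so the snippet only inspects whatever columns are present:
```python
# pip install datasets
from datasets import load_dataset

# Stream the collated instruction data and look at the first record.
ds = load_dataset("hakurei/open-instruct-v1", split="train", streaming=True)
first = next(iter(ds))
print(sorted(first.keys()))  # available fields
print(first)
```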
|
BeIR/trec-covid-qrels | 2022-10-23T06:01:04.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 333 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
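As a minimal sketch, the TREC-COVID relevance judgements in this repository can be loaded with the `datasets` library; this assumes the `BeIR/trec-covid-qrels` and `BeIR/trec-covid` repositories and the corpus/queries/qrels layout described below, and exact configuration and split names may differ:
```python
# pip install datasets
from datasets import load_dataset

# Relevance judgements (columns: query-id, corpus-id, score).
qrels = load_dataset("BeIR/trec-covid-qrels", split="test")
print(qrels[0])

# The matching corpus and queries live in the BeIR/trec-covid repository.
corpus = load_dataset("BeIR/trec-covid", "corpus", split="corpus")
queries = load_dataset("BeIR/trec-covid", "queries", split="queries")
print(corpus[0]["_id"], corpus[0]["title"])
print(queries[0]["_id"], queries[0]["text"])
```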
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models across the nine task types listed above.
The current best performing models can be found on the [leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as the header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
sarus-tech/phee | 2023-06-21T19:36:26.000Z | [
"arxiv:2210.12560",
"region:us"
] | sarus-tech | Data and Code for [``PHEE: A Dataset for Pharmacovigilance Event Extraction from Text``](https://arxiv.org/abs/2210.12560/)\ | @misc{sun2022phee,
title={PHEE: A Dataset for Pharmacovigilance Event Extraction from Text},
author={Zhaoyue Sun and Jiazheng Li and Gabriele Pergola and Byron C. Wallace and Bino John and Nigel Greene and Joseph Kim and Yulan He},
year={2022},
eprint={2210.12560},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 333 | # PHEE dataset
This dataset is a port of https://github.com/ZhaoyueSun/PHEE,
the data used in: [``PHEE: A Dataset for Pharmacovigilance Event Extraction from Text``](https://arxiv.org/abs/2210.12560/)
|
GeorgiaTech/cnotesum | 2023-09-02T13:47:25.000Z | [
"license:other",
"region:us"
] | GeorgiaTech | null | null | null | 0 | 332 | ---
license: other
---
Synthetic clinical notes based on Synthea, with summaries generated via Llama 2.
indonli | 2023-01-25T14:33:00.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"license:cc-by-sa-4.0",
"region:us"
] | null | IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.
IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set.
It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. | @inproceedings{mahendra-etal-2021-indonli,
title = "{I}ndo{NLI}: A Natural Language Inference Dataset for {I}ndonesian",
author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.821",
pages = "10511--10527",
} | null | 6 | 331 | ---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- expert-generated
language:
- id
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: indonli
pretty_name: IndoNLI
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
config_name: indonli
splits:
- name: train
num_bytes: 2265687
num_examples: 10330
- name: validation
num_bytes: 465299
num_examples: 2197
- name: test_lay
num_bytes: 473849
num_examples: 2201
- name: test_expert
num_bytes: 911916
num_examples: 2984
download_size: 6977877
dataset_size: 4116751
---
# Dataset Card for IndoNLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub](https://github.com/ir-nlp-csui/indonli)
- **Paper:** [EMNLP 2021](https://aclanthology.org/2021.emnlp-main.821/)
- **Point of Contact:** [GitHub](https://github.com/ir-nlp-csui/indonli)
### Dataset Summary
IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.
IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning.
### Supported Tasks and Leaderboards
- Natural Language Inference for Indonesian
### Languages
Indonesian
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
"premise": "Keindahan alam yang terdapat di Gunung Batu Jonggol ini dapat Anda manfaatkan sebagai objek fotografi yang cantik.",
"hypothesis": "Keindahan alam tidak dapat difoto.",
"label": 2
}
```
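A minimal loading sketch with the `datasets` library, assuming the dataset is available under the `indonli` identifier and exposes the splits listed in this card:
```python
# pip install datasets
from datasets import load_dataset

indonli = load_dataset("indonli")

# Inspect the expert-annotated test set.
test_expert = indonli["test_expert"]
ex = test_expert[0]
label_name = test_expert.features["label"].int2str(ex["label"])
print(ex["premise"])
print(ex["hypothesis"], "->", label_name)
```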
### Data Fields
The data fields are:
- `premise`: a `string` feature
- `hypothesis`: a `string` feature
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
### Data Splits
The data is split across `train`, `valid`, `test_lay`, and `test_expert`.
`test_expert` is written by expert annotators, whereas the rest are written by lay annotators.
| split | # examples |
|----------|-------:|
|train| 10330|
|valid| 2197|
|test_lay| 2201|
|test_expert| 2984|
A small subset of `test_expert` is used as a diagnostic tool. For more info, please visit https://github.com/ir-nlp-csui/indonli
## Dataset Creation
### Curation Rationale
Indonesian NLP is considered under-resourced. Up until now, there is no publicly available human-annotated NLI dataset for Indonesian.
### Source Data
#### Initial Data Collection and Normalization
The premises were collected from Indonesian Wikipedia and from other public Indonesian datasets: the Indonesian PUD and GSD treebanks provided by the [Universal Dependencies 2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) and [IndoSum](https://github.com/kata-ai/indosum).
The hypotheses were written by annotators.
#### Who are the source language producers?
The data was produced by humans.
### Annotations
#### Annotation process
We start by writing the hypothesis, given the premise and the target label. Then, we ask 2 different independent annotators to predict the label, given the premise and hypothesis. If all 3 (the original hypothesis writer + 2 independent annotators) agree on the label, the annotation process ends for that sample. Otherwise, we incrementally ask additional annotators until 3 annotators agree on the label. If there is no majority consensus after 5 annotations, the sample is removed.
#### Who are the annotators?
Lay annotators were computer science students, and expert annotators were NLP scientists with 7+ years of research experience in NLP. All annotators are native speakers.
Additionally, expert annotators were explicitly instructed to provide challenging examples by incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Annotators were compensated based on an hourly rate.
### Personal and Sensitive Information
There might be some personal information coming from Wikipedia and news, especially information about famous/important people.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
INDONLI is created using premise sentences taken from Wikipedia and news. These data sources may contain some bias.
### Other Known Limitations
No other known limitations
## Additional Information
### Dataset Curators
This dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, kata.ai, New York University, Fondazione Bruno Kessler, and the University of St Andrews.
### Licensing Information
CC-BY-SA 4.0.
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Please contact authors for any information on the dataset.
### Citation Information
```
@inproceedings{mahendra-etal-2021-indonli,
title = "{I}ndo{NLI}: A Natural Language Inference Dataset for {I}ndonesian",
author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.821",
pages = "10511--10527",
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset. |
yahoo_answers_qa | 2022-11-03T16:30:48.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-yahoo-webscope-l6",
"language:en",
"license:unknown",
"region:us"
] | null | Yahoo Non-Factoid Question Dataset is derived from Yahoo's Webscope L6 collection using machine learning techniques such that the questions would contain non-factoid answers. The dataset contains 87,361 questions and their corresponding answers. Each question contains its best answer along with additional answers submitted by users. Only the best answer was reviewed in determining the quality of the question-answer pair. | null | null | 13 | 331 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-yahoo-webscope-l6
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: null
pretty_name: YahooAnswersQa
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: nbestanswers
sequence: string
- name: main_category
dtype: string
config_name: yahoo_answers_qa
splits:
- name: train
num_bytes: 138540510
num_examples: 87362
download_size: 49411220
dataset_size: 138540510
---
# Dataset Card for YahooAnswersQa
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
### Dataset Summary
The Yahoo Non-Factoid Question Dataset is derived from Yahoo's Webscope L6 collection using machine learning techniques such that the questions contain non-factoid answers. The dataset contains 87,361 questions and their corresponding answers. Each question contains its best answer along with additional answers submitted by users.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
scikit-learn/adult-census-income | 2022-06-20T14:46:43.000Z | [
"license:cc0-1.0",
"region:us"
] | scikit-learn | null | null | null | 1 | 329 | ---
license: cc0-1.0
---
## Adult Census Income Dataset
The following was retrieved from [UCI machine learning repository](https://archive.ics.uci.edu/ml/datasets/adult).
This data was extracted from the 1994 Census bureau database by Ronny Kohavi and Barry Becker (Data Mining and Visualization, Silicon Graphics). A set of reasonably clean records was extracted using the following conditions: ((AAGE>16) && (AGI>100) && (AFNLWGT>1) && (HRSWK>0)). The prediction task is to determine whether a person makes over $50K a year.
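A minimal loading sketch (this assumes the repository exposes the CSV as a `train` split loadable with the `datasets` library; the exact column names depend on the uploaded file):
```python
from datasets import load_dataset

# Hypothetical loading example; the repo id is taken from this card, the split name is assumed.
ds = load_dataset("scikit-learn/adult-census-income", split="train")
df = ds.to_pandas()

print(df.shape)
print(df.columns.tolist())
# The prediction target is whether a person makes over $50K a year;
# the exact name of that column (e.g. "income") depends on the uploaded CSV.
```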
**Description of fnlwgt (final weight)**
The weights on the Current Population Survey (CPS) files are controlled to independent estimates of the civilian noninstitutional population of the US. These are prepared monthly for us by Population Division here at the Census Bureau. We use 3 sets of controls. These are:
- A single cell estimate of the population 16+ for each state.
- Controls for Hispanic Origin by age and sex.
- Controls by Race, age and sex.
We use all three sets of controls in our weighting program and "rake" through them 6 times so that by the end we come back to all the controls we used. The term estimate refers to population totals derived from CPS by creating "weighted tallies" of any specified socio-economic characteristics of the population. People with similar demographic characteristics should have similar weights. There is one important caveat to remember about this statement. That is that since the CPS sample is actually a collection of 51 state samples, each with its own probability of selection, the statement only applies within state. |
medalpaca/medical_meadow_wikidoc_patient_information | 2023-04-06T17:08:53.000Z | [
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] | medalpaca | null | null | null | 5 | 329 | ---
license: cc
task_categories:
- question-answering
language:
- en
---
# Dataset Card for WikiDoc
For the dataset containing rephrased content from the living textbook refer to [this dataset](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc)
## Dataset Description
- **Source:** https://www.wikidoc.org/index.php/Main_Page
- **Repository:** https://github.com/kbressem/medalpaca
- **Paper:** TBA
### Dataset Summary
This dataset contains medical question-answer pairs extracted from [WikiDoc](https://www.wikidoc.org/index.php/Main_Page),
a collaborative platform for medical professionals to share and contribute to up-to-date medical knowledge.
The platform has two main subsites, the "Living Textbook" and "Patient Information". The "Living Textbook"
contains chapters for various medical specialties, which we crawled. We then used GPT-3.5-Turbo to rephrase
each paragraph heading into a question and used the paragraph as the answer. Patient Information is structured differently,
in that each section subheading is already a question, making rephrasing unnecessary.
**Note:** This dataset is still a WIP. While the Q/A pairs from the patient information seem to be mostly correct,
the conversion using GPT-3.5-Turbo yielded some unsatisfactory results in approximately 30% of cases. We are in the process of cleaning this dataset.
### Citation Information
TBA |
hoskinson-center/proofnet | 2023-03-17T21:25:37.000Z | [
"license:mit",
"arxiv:2302.12433",
"region:us"
] | hoskinson-center | A dataset that evaluates formally proving and autoformalizing undergraduate mathematics. | null | null | 8 | 328 | ---
license: mit
---
# ProofNet
## Dataset Description
- **Repository:** [zhangir-azerbayev/ProofNet](https://github.com/zhangir-azerbayev/ProofNet)
- **Paper:** [ProofNet](https://mathai2022.github.io/papers/20.pdf)
- **Point of Contact:** [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/)
### Dataset Summary
ProofNet is a benchmark for autoformalization and formal proving of undergraduate-level mathematics. The ProofNet benchmark consists of 371 examples, each consisting of a formal theorem statement in Lean 3, a natural language theorem statement, and a natural language proof. The problems are primarily drawn from popular undergraduate pure mathematics textbooks and cover topics such as real and complex analysis, linear algebra, abstract algebra, and topology. We intend for ProofNet to be a challenging benchmark that will drive progress in autoformalization and automatic theorem proving.
**Citation**:
```bibtex
@misc{azerbayev2023proofnet,
title={ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics},
author={Zhangir Azerbayev and Bartosz Piotrowski and Hailey Schoelkopf and Edward W. Ayers and Dragomir Radev and Jeremy Avigad},
year={2023},
eprint={2302.12433},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Leaderboard
**Statement Autoformalization**
| Model | Typecheck Rate | Accuracy |
| ---------------------------------- | -------------- | -------- |
| Code-davinci-002 (prompt retrieval)| 45.2 | 16.1 |
| Code-davinci-002 (in-context learning) | 23.7 | 13.4 |
| proofGPT-1.3B | 10.7 | 3.2 |
**Statement Informalization**
| Model | Accuracy |
| ---------------------------------- | -------- |
| Code-davinci-002 (in-context learning)| 62.3 |
| proofGPT-6.7B (in-context learning) | 6.5 |
| proofGPT-1.3B (in-context learning) | 4.3 |
### Data Fields
- `id`: Unique string identifier for the problem.
- `nl_statement`: Natural language theorem statement.
- `nl_proof`: Natural language proof, in LaTeX. Depends on `amsthm, amsmath, amssymb` packages.
- `formal_statement`: Formal theorem statement in Lean 3.
- `src_header`: File header including imports, namespaces, and locales required for the formal statement. Note that the formal statement may locally import [common.lean](https://github.com/zhangir-azerbayev/ProofNet/blob/main/benchmark/benchmark_to_publish/formal/common.lean), which has to be manually downloaded and placed in the same directory as the `.lean` file containing the formal statement.
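A minimal loading sketch (assuming the benchmark is published on the Hub as `hoskinson-center/proofnet`; check the repository for the exact split names):
```python
from datasets import load_dataset

proofnet = load_dataset("hoskinson-center/proofnet")
print(proofnet)  # shows the available splits

first_split = next(iter(proofnet))
example = proofnet[first_split][0]
print(example["nl_statement"])      # natural language theorem statement
print(example["formal_statement"])  # Lean 3 statement
print(example["src_header"])        # header required to typecheck the statement
```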
### Authors
Zhangir Azerbayev, Bartosz Piotrowski, Jeremy Avigad |
carlosdanielhernandezmena/ravnursson_asr | 2023-07-10T21:20:03.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fo",
"license:cc-by-4.0",
"faroe islands",
"faroese",
"ravnur project",
"speech recognition in faroese",
"region:us"
] | carlosdanielhernandezmena | The corpus \"RAVNURSSON FAROESE SPEECH AND TRANSCRIPTS\" (or RAVNURSSON Corpus for short) is a collection of speech recordings with transcriptions intended for Automatic Speech Recognition (ASR) applications in the language that is spoken at the Faroe Islands (Faroese). It was curated at the Reykjavík University (RU) in 2022. | @misc{carlosmenaravnursson2022,
title={Ravnursson Faroese Speech and Transcripts},
author={Hernandez Mena, Carlos Daniel and Simonsen, Annika},
year={2022},
url={http://hdl.handle.net/20.500.12537/276},
} | null | 1 | 327 | ---
annotations_creators:
- expert-generated
language:
- fo
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: RAVNURSSON FAROESE SPEECH AND TRANSCRIPTS
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- faroe islands
- faroese
- ravnur project
- speech recognition in faroese
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for ravnursson_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Ravnursson Faroese Speech and Transcripts](http://hdl.handle.net/20.500.12537/276)
- **Repository:** [Clarin.is](http://hdl.handle.net/20.500.12537/276)
- **Paper:** [ASR Language Resources for Faroese](https://aclanthology.org/2023.nodalida-1.4.pdf)
- **Paper:** [Creating a basic language resource kit for faroese.](https://aclanthology.org/2022.lrec-1.495.pdf)
- **Point of Contact:** [Annika Simonsen](mailto:annika.simonsen@hotmail.com), [Carlos Mena](mailto:carlos.mena@ciempiess.org)
### Dataset Summary
The corpus "RAVNURSSON FAROESE SPEECH AND TRANSCRIPTS" (or RAVNURSSON Corpus for short) is a collection of speech recordings with transcriptions intended for Automatic Speech Recognition (ASR) applications in the language that is spoken at the Faroe Islands (Faroese). It was curated at the Reykjavík University (RU) in 2022.
The RAVNURSSON Corpus is an extract of the "Basic Language Resource Kit 1.0" (BLARK 1.0) [1] developed by the Ravnur Project from the Faroe Islands [2]. As a matter of fact, the name RAVNURSSON comes from Ravnur (a tribute to the Ravnur Project) and the suffix "son" which in Icelandic means "son of". Therefore, the name "RAVNURSSON" means "The (Icelandic) son of Ravnur". The double "ss" is just for aesthetics.
The audio was collected by recording speakers reading texts. The participants are aged 15-83, divided into 3 age groups: 15-35, 36-60 and 61+.
The speech files come from 249 female speakers and 184 male speakers; 433 speakers in total. The recordings were made on TASCAM DR-40 Linear PCM audio recorders using the built-in stereo microphones in WAVE 16 bit with a sample rate of 48kHz, but were then downsampled to 16kHz@16bit mono for this corpus.
[1] Simonsen, A., Debess, I. N., Lamhauge, S. S., & Henrichsen, P. J. Creating a basic language resource kit for Faroese. In LREC 2022. 13th International Conference on Language Resources and Evaluation.
[2] Website. The Project Ravnur under the Talutøkni Foundation https://maltokni.fo/en/the-ravnur-project
### Example Usage
The RAVNURSSON Corpus is divided into 3 splits: train, validation and test. To load the full dataset:
```python
from datasets import load_dataset
ravnursson = load_dataset("carlosdanielhernandezmena/ravnursson_asr")
```
To load a specific split (for example, the validation split), pass its name to the `split` argument:
```python
from datasets import load_dataset
ravnursson = load_dataset("carlosdanielhernandezmena/ravnursson_asr",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### Languages
The audio is in Faroese.
The reading prompts for the RAVNURSSON Corpus have been generated by expert linguists. The whole corpus was balanced for phonetic and dialectal coverage; Test and Dev subsets are gender-balanced. Tabular computer-searchable information is included as well as written documentation.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'KAM06_151121_0101',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/32b4a757027b72b8d2e25cd9c8be9c7c919cc8d4eb1a9a899e02c11fd6074536/dev/RDATA2/KAM06_151121/KAM06_151121_0101.flac',
'array': array([ 0.0010376 , -0.00521851, -0.00393677, ..., 0.00128174,
0.00076294, 0.00045776], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'KAM06_151121',
'gender': 'female',
'age': '36-60',
'duration': 4.863999843597412,
'normalized_text': 'endurskin eru týdningarmikil í myrkri',
'dialect': 'sandoy'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - range of age of the speaker: Younger (15-35), Middle-aged (36-60) or Elderly (61+).
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
* `dialect` (string) - dialect group, for example "Suðuroy" or "Sandoy".
### Data Splits
The speech material has been subdivided into portions for training (train), development (evaluation) and testing (test). The lengths of each portion are: train = 100h08m, test = 4h30m, dev (evaluation) = 4h30m.
To load a specific portion, please see the section "Example Usage" above.
The development and test portions have exactly 10 male and 10 female speakers each and both portions have exactly the same size in hours (4.5h each).
## Dataset Creation
### Curation Rationale
The directory called "speech" contains all the speech files of the corpus. The files in the speech directory are divided into three directories: train, dev and test. The train portion is sub-divided into three types of recordings: RDATA1O, RDATA1OP and RDATA2; this is due to the organization of the recordings in the original BLARK 1.0, where the recordings are divided into Rdata1 and Rdata2.
One main difference between Rdata1 and Rdata2 is that the reading environment for Rdata2 was controlled by a software tool called "PushPrompt", which is included in the original BLARK 1.0. Another main difference is that Rdata1 has some transcriptions labeled at the phoneme level. For this reason, the audio files in the speech directory of the RAVNURSSON corpus are divided into the folders RDATA1O, where "O" stands for "Orthographic", and RDATA1OP, where "O" stands for "Orthographic" and "P" for "Phonetic".
In the case of the dev and test portions, the data comes only from Rdata2, which does not have labels at the phonetic level.
It is important to clarify that the RAVNURSSON Corpus only includes transcriptions at the orthographic level.
### Source Data
#### Initial Data Collection and Normalization
The dataset was released with normalized text only at an orthographic level in lower-case. The normalization process was performed by automatically removing punctuation marks and characters that are not present in the Faroese alphabet.
#### Who are the source language producers?
* The utterances were recorded using a TASCAM DR-40.
* Participants self-reported their age group, gender, native language and dialect.
* Participants are aged between 15 to 83 years.
* The corpus contains 71949 speech files from 433 speakers, totalling 109 hours and 9 minutes.
### Annotations
#### Annotation process
Most of the reading prompts were selected by experts from a Faroese text corpus (news, blogs, Wikipedia etc.) and were edited to fit the format. Reading prompts that are within specific domains (such as Faroese place names, numbers, license plates, telling time etc.) were written by the Ravnur Project. Then, a software tool called PushPrompt was used for the reading sessions (voice recordings). PushPrompt presents the text items in the reading material to the reader, allowing him/her to manage the session interactively (adjusting the reading tempo, repeating speech productions at wish, inserting short breaks as needed, etc.). When the reading session is completed, a log file (with time stamps for each production) is written as a data table compliant with the TextGrid format.
#### Who are the annotators?
The corpus was annotated by the [Ravnur Project](https://maltokni.fo/en/the-ravnur-project)
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This is the first ASR corpus in Faroese.
### Discussion of Biases
As the number of reading prompts was limited, the common denominator in the RAVNURSSON corpus is that one prompt is read by more than one speaker. This is relevant because it is a common practice in ASR to create a language model using the prompts found in the train portion of the corpus. That is not recommended for the RAVNURSSON Corpus, as many prompts are shared across all the portions, which would introduce an important bias in the language modeling task.
In this section we present some statistics about the repeated prompts through all the portions of the corpus.
- In the train portion:
* Total number of prompts = 65616
* Number of unique prompts = 38646
There are 26970 repeated prompts in the train portion. In other words, 41.1% of the prompts are repeated.
- In the test portion:
* Total number of prompts = 3002
* Number of unique prompts = 2887
There are 115 repeated prompts in the test portion. In other words, 3.83% of the prompts are repeated.
- In the dev portion:
* Total number of prompts = 3331
* Number of unique prompts = 3302
There are 29 repeated prompts in the dev portion. In other words, 0.87% of the prompts are repeated.
- Considering the corpus as a whole:
* Total number of prompts = 71949
* Number of unique prompts = 39945
There are 32004 repeated prompts in the whole corpus. In other words, 44.48% of the prompts are repeated.
NOTICE!: It is also important to clarify that none of the 3 portions of the corpus share speakers.
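The repeated-prompt statistics above can be recomputed with a short sketch like the following (a sketch only; note that loading every split downloads the full corpus):
```python
from datasets import load_dataset

for split in ["train", "validation", "test"]:
    ds = load_dataset("carlosdanielhernandezmena/ravnursson_asr", split=split)
    prompts = ds["normalized_text"]
    total, unique = len(prompts), len(set(prompts))
    repeated = total - unique
    print(f"{split}: {total} prompts, {unique} unique, "
          f"{repeated} repeated ({100 * repeated / total:.2f}%)")
```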
### Other Known Limitations
"RAVNURSSON FAROESE SPEECH AND TRANSCRIPTS" by Carlos Daniel Hernández Mena and Annika Simonsen is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The dataset was collected by Annika Simonsen and curated by Carlos Daniel Hernández Mena.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{carlosmenaravnursson2022,
title={Ravnursson Faroese Speech and Transcripts},
author={Hernandez Mena, Carlos Daniel and Simonsen, Annika},
year={2022},
url={http://hdl.handle.net/20.500.12537/276},
}
```
### Contributions
This project was made possible under the umbrella of the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
Special thanks to Dr. Jón Guðnason, professor at Reykjavík University and head of the Language and Voice Lab (LVL) for providing computational resources.
|
amitness/PAWS-X-maltese | 2023-05-03T12:09:04.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 326 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
- name: sentence1_mt
dtype: string
- name: sentence2_mt
dtype: string
splits:
- name: test
num_bytes: 972852
num_examples: 2000
- name: train
num_bytes: 23898021
num_examples: 49175
- name: validation
num_bytes: 965498
num_examples: 2000
download_size: 18059931
dataset_size: 25836371
---
# Dataset Card for "PAWS-X-maltese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nisaar/LLAMA2_Legal_Dataset_4.4k_Instructions | 2023-07-30T15:25:03.000Z | [
"license:apache-2.0",
"region:us"
] | nisaar | null | null | null | 11 | 326 | ---
license: apache-2.0
---
|
glaiveai/glaive-function-calling-v2 | 2023-09-27T18:04:08.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | glaiveai | null | null | null | 11 | 326 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
--- |
Brendan/icdst_multiwoz_turns_v24 | 2023-08-04T23:01:11.000Z | [
"region:us"
] | Brendan | null | null | null | 0 | 325 | ---
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: turn_id
dtype: int8
- name: domains
sequence: string
- name: user_utterances
sequence: string
- name: system_utterances
sequence: string
- name: slot_values
struct:
- name: hotel
struct:
- name: price range
dtype: string
- name: type
dtype: string
- name: parking
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book stay
dtype: string
- name: stars
dtype: string
- name: internet
dtype: string
- name: name
dtype: string
- name: area
dtype: string
- name: train
struct:
- name: arrive by
dtype: string
- name: departure
dtype: string
- name: day
dtype: string
- name: book people
dtype: string
- name: leave at
dtype: string
- name: destination
dtype: string
- name: attraction
struct:
- name: area
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: restaurant
struct:
- name: price range
dtype: string
- name: area
dtype: string
- name: food
dtype: string
- name: name
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book time
dtype: string
- name: taxi
struct:
- name: leave at
dtype: string
- name: destination
dtype: string
- name: departure
dtype: string
- name: arrive by
dtype: string
- name: turn_slot_values
struct:
- name: hotel
struct:
- name: price range
dtype: string
- name: type
dtype: string
- name: parking
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book stay
dtype: string
- name: stars
dtype: string
- name: internet
dtype: string
- name: name
dtype: string
- name: area
dtype: string
- name: train
struct:
- name: arrive by
dtype: string
- name: departure
dtype: string
- name: day
dtype: string
- name: book people
dtype: string
- name: leave at
dtype: string
- name: destination
dtype: string
- name: attraction
struct:
- name: area
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: restaurant
struct:
- name: price range
dtype: string
- name: area
dtype: string
- name: food
dtype: string
- name: name
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book time
dtype: string
- name: taxi
struct:
- name: leave at
dtype: string
- name: destination
dtype: string
- name: departure
dtype: string
- name: arrive by
dtype: string
- name: last_slot_values
struct:
- name: hotel
struct:
- name: price range
dtype: string
- name: type
dtype: string
- name: parking
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book stay
dtype: string
- name: stars
dtype: string
- name: internet
dtype: string
- name: name
dtype: string
- name: area
dtype: string
- name: train
struct:
- name: arrive by
dtype: string
- name: departure
dtype: string
- name: day
dtype: string
- name: book people
dtype: string
- name: leave at
dtype: string
- name: destination
dtype: string
- name: attraction
struct:
- name: area
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: restaurant
struct:
- name: price range
dtype: string
- name: area
dtype: string
- name: food
dtype: string
- name: name
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book time
dtype: string
- name: taxi
struct:
- name: leave at
dtype: string
- name: destination
dtype: string
- name: departure
dtype: string
- name: arrive by
dtype: string
splits:
- name: train
num_bytes: 61435522
num_examples: 54971
- name: validation
num_bytes: 8468954
num_examples: 7374
- name: test
num_bytes: 8487792
num_examples: 7368
- name: valid_20p_ablation
num_bytes: 1661862.8204502305
num_examples: 1447
- name: valid_10p
num_bytes: 839545.0737727149
num_examples: 731
- name: 1p_train_v1
num_bytes: 585621.7556165978
num_examples: 524
- name: 1p_train_v2
num_bytes: 583386.5580760764
num_examples: 522
- name: 1p_train_v3
num_bytes: 647089.6879809354
num_examples: 579
- name: 5p_train_v1
num_bytes: 3052162.241581925
num_examples: 2731
- name: 5p_train_v2
num_bytes: 3077867.0132979206
num_examples: 2754
- name: 5p_train_v3
num_bytes: 2994047.1055283695
num_examples: 2679
- name: 10p_train_v1
num_bytes: 6124441.261028542
num_examples: 5480
- name: 10p_train_v2
num_bytes: 6123323.662258281
num_examples: 5479
- name: 10p_train_v3
num_bytes: 6049562.143421076
num_examples: 5413
download_size: 13917589
dataset_size: 110131177.32301268
---
# Dataset Card for "icdst_multiwoz_turns_v24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/463b7b19 | 2023-09-28T06:54:08.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 325 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 161
num_examples: 10
download_size: 1299
dataset_size: 161
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "463b7b19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
freebase_qa | 2022-11-18T20:03:22.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|trivia_qa",
"language:en",
"license:unknown",
"region:us"
] | null | FreebaseQA is for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase. The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. | @article{jiang2019freebaseqa,
title={FreebaseQA: A New Factoid QA Dataset Matching Trivia-Style Question-Answer Pairs with Freebase},
author={Jiang, Kelvin and Wu, Dekun and Jiang, Hui},
journal={north american chapter of the association for computational linguistics},
year={2019}
} | null | 2 | 324 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|trivia_qa
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: freebaseqa
pretty_name: FreebaseQA
dataset_info:
features:
- name: Question-ID
dtype: string
- name: RawQuestion
dtype: string
- name: ProcessedQuestion
dtype: string
- name: Parses
sequence:
- name: Parse-Id
dtype: string
- name: PotentialTopicEntityMention
dtype: string
- name: TopicEntityName
dtype: string
- name: TopicEntityMid
dtype: string
- name: InferentialChain
dtype: string
- name: Answers
sequence:
- name: AnswersMid
dtype: string
- name: AnswersName
sequence: string
splits:
- name: train
num_bytes: 10235375
num_examples: 20358
- name: test
num_bytes: 1987874
num_examples: 3996
- name: validation
num_bytes: 1974114
num_examples: 3994
download_size: 33204999
dataset_size: 14197363
---
# Dataset Card for FreebaseQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [FreebaseQA repository](https://github.com/kelvin-jiang/FreebaseQA)
- **Paper:** [FreebaseQA ACL paper](https://www.aclweb.org/anthology/N19-1028.pdf)
- **Leaderboard:**
- **Point of Contact:** [Kelvin Jiang](https://github.com/kelvin-jiang)
### Dataset Summary
FreebaseQA is a dataset for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{'Parses': {'Answers': [{'AnswersMid': ['m.01npcx'], 'AnswersName': [['goldeneye']]}, {'AnswersMid': ['m.01npcx'], 'AnswersName': [['goldeneye']]}], 'InferentialChain': ['film.film_character.portrayed_in_films..film.performance.film', 'film.actor.film..film.performance.film'], 'Parse-Id': ['FreebaseQA-train-0.P0', 'FreebaseQA-train-0.P1'], 'PotentialTopicEntityMention': ['007', 'pierce brosnan'], 'TopicEntityMid': ['m.0clpml', 'm.018p4y'], 'TopicEntityName': ['james bond', 'pierce brosnan']}, 'ProcessedQuestion': "what was pierce brosnan's first outing as 007", 'Question-ID': 'FreebaseQA-train-0', 'RawQuestion': "What was Pierce Brosnan's first outing as 007?"}
```
### Data Fields
- `Question-ID`: a `string` feature representing ID of each question.
- `RawQuestion`: a `string` feature representing the original question collected from data sources.
- `ProcessedQuestion`: a `string` feature representing the question processed with some operations such as removal of trailing question mark and decapitalization.
- `Parses`: a dictionary feature representing the semantic parse(s) for the question containing:
- `Parse-Id`: a `string` feature representing the ID of each semantic parse.
- `PotentialTopicEntityMention`: a `string` feature representing the potential topic entity mention in the question.
- `TopicEntityName`: a `string` feature representing name or alias of the topic entity in the question from Freebase.
- `TopicEntityMid`: a `string` feature representing the Freebase MID of the topic entity in the question.
- `InferentialChain`: a `string` feature representing path from the topic entity node to the answer node in Freebase, labeled as a predicate.
- `Answers`: a dictionary feature representing the answer found from this parse containing:
- `AnswersMid`: a `string` feature representing the Freebase MID of the answer.
- `AnswersName`: a `list` of `string` features representing the answer string from the original question-answer pair.
### Data Splits
This data set contains 28,348 unique questions that are divided into three subsets: train (20,358), dev (3,994) and eval (3,996), formatted as JSON files: FreebaseQA-[train|dev|eval].json
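A minimal loading sketch (assuming the dataset is available on the Hugging Face Hub under the id `freebase_qa`):
```python
from datasets import load_dataset

freebase_qa = load_dataset("freebase_qa")

example = freebase_qa["train"][0]
print(example["RawQuestion"])

# Each question can carry several semantic parses; inspect the first one.
parses = example["Parses"]
print(parses["InferentialChain"][0])        # predicate path in Freebase
print(parses["Answers"][0]["AnswersName"])  # answer string(s) for the first parse
```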
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove false positives in these matched triples.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Kelvin Jiang - Currently at University of Waterloo. Work was done at
York University.
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{jiang-etal-2019-freebaseqa,
title = "{F}reebase{QA}: A New Factoid {QA} Data Set Matching Trivia-Style Question-Answer Pairs with {F}reebase",
author = "Jiang, Kelvin and
Wu, Dekun and
Jiang, Hui",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N19-1028",
doi = "10.18653/v1/N19-1028",
pages = "318--323",
abstract = "In this paper, we present a new data set, named FreebaseQA, for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase. The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove any false positive in these matched triples. Using this method, we are able to efficiently generate over 54K matches from about 28K unique questions with minimal cost. Our analysis shows that this data set is suitable for model training in factoid QA tasks beyond simpler questions since FreebaseQA provides more linguistically sophisticated questions than other existing data sets.",
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. |
lmqg/qa_squadshifts_synthetic | 2023-01-15T14:25:15.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | lmqg | null | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | 0 | 322 | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset on SQuADShifts.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_squadshifts_synthetic"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for the question-answering based evaluation (QAE) of question generation models proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
TBA
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
HumanCompatibleAI/ppo-seals-HalfCheetah-v0 | 2023-05-29T09:52:45.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | null | 0 | 322 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 89536876
num_examples: 104
download_size: 24489478
dataset_size: 89536876
---
# Dataset Card for "ppo-seals-HalfCheetah-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mediabiasgroup/mbib-base | 2023-08-03T01:03:05.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc",
"media",
"mediabias",
"media-bias",
"media bias",
"region:us"
] | mediabiasgroup | null | null | null | 5 | 321 | ---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- media
- mediabias
- media-bias
- media bias
size_categories:
- 1M<n<10M
---
# Dataset Card for Media-Bias-Identification-Benchmark
## Table of Contents
- [Dataset Card for Media-Bias-Identification-Benchmark](#dataset-card-for-mbib)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Tasks and Information](#tasks-and-information)
- [Baseline](#baseline)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [cognitive-bias](#cognitive-bias)
- [Data Fields](#data-fields)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- **Repository:** https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- **Paper:** https://doi.org/10.1145/3539618.3591882
- **Point of Contact:** [Martin Wessel](mailto:martin.wessel@uni-konstanz.de)
### Baseline
<table>
<tr><td><b>Task</b></td><td><b>Model</b></td><td><b>Micro F1</b></td><td><b>Macro F1</b></td></tr>
<tr><td>cognitive-bias</td><td>ConvBERT/ConvBERT</td><td>0.7126</td><td>0.7664</td></tr>
<tr><td>fake-news</td><td>Bart/RoBERTa-T</td><td>0.6811</td><td>0.7533</td></tr>
<tr><td>gender-bias</td><td>RoBERTa-T/ELECTRA</td><td>0.8334</td><td>0.8211</td></tr>
<tr><td>hate-speech</td><td>RoBERTa-T/Bart</td><td>0.8897</td><td>0.7310</td></tr>
<tr><td>linguistic-bias</td><td>ConvBERT/Bart</td><td>0.7044</td><td>0.4995</td></tr>
<tr><td>political-bias</td><td>ConvBERT/ConvBERT</td><td>0.7041</td><td>0.7110</td></tr>
<tr><td>racial-bias</td><td>ConvBERT/ELECTRA</td><td>0.8772</td><td>0.6170</td></tr>
<tr><td>text-level-bias</td><td>ConvBERT/ConvBERT</td><td>0.7697</td><td>0.7532</td></tr>
</table>
### Languages
All datasets are in English
## Dataset Structure
### Data Instances
#### cognitive-bias
An example of one training instance looks as follows.
```json
{
"text": "A defense bill includes language that would require military hospitals to provide abortions on demand",
"label": 1
}
```
### Data Fields
- `text`: a sentence from various sources (eg., news articles, twitter, other social media).
- `label`: binary indicator of bias (0 = unbiased, 1 = biased)
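A minimal loading sketch (a sketch only; it assumes each task name from the baseline table above, e.g. `cognitive-bias`, is exposed as a config name of the dataset):
```python
from datasets import load_dataset

# Load the cognitive-bias task; other tasks (e.g. "hate-speech") should load the same way.
mbib = load_dataset("mediabiasgroup/mbib-base", "cognitive-bias")

split = next(iter(mbib))
example = mbib[split][0]
print(example["text"], example["label"])  # label: 0 = unbiased, 1 = biased
```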
## Considerations for Using the Data
### Social Impact of Dataset
We believe that MBIB offers a new common ground
for research in the domain, especially given the rising amount of
(research) attention directed toward media bias.
### Citation Information
```
@inproceedings{
title = {Introducing MBIB - the first Media Bias Identification Benchmark Task and Dataset Collection},
author = {Wessel, Martin and Spinde, Timo and Horych, Tomáš and Ruas, Terry and Aizawa, Akiko and Gipp, Bela},
year = {2023},
note = {[in review]}
}
``` |
codeparrot/self-instruct-starcoder | 2023-06-21T08:52:23.000Z | [
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"license:bigscience-openrail-m",
"code",
"arxiv:2212.10560",
"arxiv:2305.06161",
"arxiv:1908.10084",
"doi:10.57967/hf/0790",
"region:us"
] | codeparrot | null | null | null | 26 | 321 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: most_similar
dtype: string
- name: avg_similarity_score
dtype: float64
splits:
- name: curated
num_bytes: 1937514
num_examples: 771
- name: raw
num_bytes: 12969008
num_examples: 5003
- name: unique
num_bytes: 786771
num_examples: 308
- name: compile
num_bytes: 9048805
num_examples: 3549
download_size: 10935008
dataset_size: 24742098
tags:
- code
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
license: bigscience-openrail-m
---
# Self-instruct-starcoder
## Table of Contents
- [Summary](#summary)
- [Our approach](#our-approach)
- [Dataset generation](#dataset-generation)
- [Dataset quality](#dataset-quality)
- [Post-processing](#post-processing)
- [Self-consistency](#self-consistency)
- [Uniqueness](#uniqueness)
- [Compile](#compile)
- [Dataset structure](#dataset-structure)
- [Space](#space)
## Summary
Self-instruct-starcoder is a dataset that was generated by prompting starcoder to generate new instructions based on some human-written seed instructions.
The underlying process is explained in the paper [self-instruct](https://arxiv.org/abs/2212.10560). This algorithm gave birth to famous machine generated
datasets such as [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Code Alpaca](https://github.com/sahil280114/codealpaca) which are two datasets
obtained by prompting OpenAI `text-davinci-003` engine.
## Our approach
While our method is similar to self-instruct and stanford alpaca, we included some relevant modifications to the pipeline to account for what we wanted.
- Rather than using `text-davinci-003`, we chose to prompt [StarCoder](https://arxiv.org/abs/2305.06161) which is a 10x smaller LLM developed for code use cases. However, it is possible to use any decoder based LLM on the hub.
- We changed our seed tasks in order to have the model generate code related tasks. We completed the seed tasks from code alpaca with 20 additional algorithm instructions.
- We switched from the generation format `"instruction":` - `"input":` - `"output":` to the format `"instruction":` - `"output":` by concatenating each instruction and its input under the
keyword `instruction`. We did so because the previous prompting format tended to make the model generate test cases as input and their solution as output, which is not what we wanted.
- Finally, we incorporated the possibility to change the trigger word in the prompt. We thus replaced the `"instruction" :` keyword by `"Here is the correct solution to the problem ":` which
resulted into much better generated instructions.
## Dataset generation
The generation of the dataset was time consuming and we chose our parameters to limit the computational burden of our method.
- Number of examples in context : 4
- 2 seed instructions
- 2 machine generated instructions
- Number of instructions to generate : 5000
- Stop words used in the generation : ["\n20", "20.", "20 ."]
- Similarity threshold for rouge score : 0.7
## Dataset quality
StarCoder, while being a great model, is not as capable as `text-davinci-003`. During the generation, the model quickly reaches a sort of ceiling in terms of creativity.
There are many instructions that are similar to each other, but this should not be a problem since they are not phrased the same way.
## Post-processing
Post-processing is an important part of the pipeline since it improves the quality of the dataset despite the fact that it implies getting rid of some examples. First we
need to identify what we want to avoid:
- A generated solution which does not answer the corresponding instruction
- An instruction that is too similar to another one.
### Self-consistency
We imagined a process that we named **self-consistency**. The idea is to reverse-prompt the model to see if it can generate a sound instruction that corresponds to the
solution (output) it is prompted with. This is a particularly difficult few-shot task, and unfortunately StarCoder does not perform incredibly well on it. With a few-shot parameter of `4`
(all examples being seed tasks), the model is able to recover 1135 instructions out of 5003, which amounts to 22.6% of the raw dataset. Fortunately, the inability of StarCoder to generate instructions for some
solutions does not mean we should get rid of them. For the solutions (outputs) with generated instructions, we can compare these with the ground truth. For that we can use [Sentence-BERT](https://arxiv.org/abs/1908.10084) because the comparison should focus on the meaning
rather than the word-to-word similarity ratio. We have about 771 instructions (~68%) with a similarity score >= 0.5 with their ground truth. These can be seen as high-quality examples; they form the `curated` set.
<p align="center">
<img src="https://huggingface.co/datasets/codeparrot/self-instruct-starcoder/resolve/main/output.png" alt="drawing" width="300", height="300"/>
</p>
### Uniqueness
Another approach that can be used to clean the raw dataset is to focus on distinct instructions. For a given instruction, we go through all the instructions generated before it to see if there is one with a similarity score >= 0.5.
If that is the case, we remove the instruction. This process removes about 94% of the raw dataset; the remaining instructions form the `unique` set.
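A rough sketch of this kind of similarity-based deduplication using Sentence-BERT embeddings (the specific embedding model and threshold handling here are assumptions, not the exact pipeline used to build the `unique` split):
```python
from sentence_transformers import SentenceTransformer, util

def deduplicate(instructions, threshold=0.5):
    """Keep an instruction only if it is not too similar to any previously kept one."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = model.encode(instructions, convert_to_tensor=True)
    kept = []
    for i in range(len(instructions)):
        if kept:
            scores = util.cos_sim(embeddings[i], embeddings[kept])
            if scores.max().item() >= threshold:
                continue  # too close to an instruction we already kept, so drop it
        kept.append(i)
    return [instructions[i] for i in kept]
```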
### Compile
We also decided to build a set which contains solely the examples featuring code written in Python 3 that does not raise a compilation error.
## Dataset structure
```python
from datasets import load_dataset
dataset = load_dataset("codeparrot/self-instruct-starcoder")
DatasetDict({
compile: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 3549
})
curated: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 771
})
raw: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 5003
})
unique: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 308
})
})
```
|Field|Type|Description|
|---|---|---|
|instruction|string|Instruction|
|output|string|Answer to the instruction|
|most_similar|string|Dictionary containing the 10 most similar instructions generated before the current instruction along with the similarity scores|
|avg_similarity_score|float64| Average similarity score|
## Additional resources
- [Space(self-instruct-starcoder)](https://huggingface.co/spaces/codeparrot/self-instruct-starcoder)
- [Github Repository](https://github.com/ArmelRandy/Self-instruct)
## Citation
```
@misc{title={Self-Instruct-StarCoder},
author={Zebaze, Armel Randy},
doi={https://doi.org/10.57967/hf/0790},
}
```
|
sam-mosaic/chat-v2 | 2023-07-18T00:23:25.000Z | [
"language:en",
"region:us"
] | sam-mosaic | null | null | null | 2 | 321 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1053541716.4621352
num_examples: 306305
- name: test
num_bytes: 20265459.694286585
num_examples: 5339
download_size: 505718158
dataset_size: 1073807176.1564217
---
# Dataset Card for "chat_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hackathon-pln-es/ITAMA-DataSet | 2022-04-04T03:32:20.000Z | [
"region:us"
] | hackathon-pln-es | null | null | null | 2 | 320 | # Extracción de datos de Reddit
Se descargaron todos los titulos de los hilos de algunas comunidades en español de Reddit entre marzo del 2017 y enero del 2022:
| Community | # of threads |
|----------------------------|-------------|
|AskRedditespanol | 28072 |
| BOLIVIA | 4935 |
| PERU | 20735 |
| argentina | 214986 |
| chile | 69077 |
|espanol | 39376 |
| mexico | 136984 |
| preguntaleareddit | 37300 |
| uruguay | 55693 |
| vzla | 42909 |
# Labels
Some of the threads were then manually labeled as AMA vs. non-AMA.
A total of 757 threads were labeled (AMA: 290, non-AMA: 458), following a query-by-committee strategy.
This can be inspected in the `etiqueta_ama.csv` file.
Using these 757 threads, a label spreading algorithm was run to identify the remaining AMA threads, yielding a total of 3519 threads (a minimal sketch of this step follows below).
This can be inspected in the `autoetiquetado_ama.csv` file.
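A minimal sketch of this semi-supervised step with scikit-learn's `LabelSpreading`, assuming TF-IDF features over the thread titles (the actual feature representation is not documented here); `titles` and `labels` are hypothetical inputs, with `-1` marking unlabeled threads:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

# titles: all thread titles; labels: 1 (AMA), 0 (non-AMA), -1 (unlabeled) -- hypothetical inputs
features = TfidfVectorizer(max_features=2000).fit_transform(titles).toarray()
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(features, labels)
propagated_labels = model.transduction_  # AMA / non-AMA label for every thread
```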
To identify the professions of the people who created the threads, the following list was used:
https://raw.githubusercontent.com/davoclavo/adigmatangadijolachanga/master/profesiones.txt
To cover all possibilities, both the "-a" and "-o" endings of every profession were added.
Similar professions were then grouped together to obtain a comparable number of threads per profession, using the following dictionary:
```
# "sinonimos" maps related professions to a single canonical label (profession names kept in Spanish, as in the data)
sinonimos = {
'sexologo': 'psicologo',
'enfermero': 'medico',
'farmaceutico': 'medico',
'cirujano': 'medico',
'doctor': 'medico',
'radiologo': 'medico',
'dentista': 'odontologo',
'matron': 'medico',
'patologo': 'medico',
'educador': 'profesor',
'maestro': 'profesor',
'programador': 'ingeniero',
'informatico': 'ingeniero',
'juez': 'abogado',
'fiscal': 'abogado',
'oficial': 'abogado',
'astronomo': 'ciencias',
'fisico': 'ciencias',
'ecologo': 'ciencias',
'filosofo': 'ciencias',
'biologo': 'ciencias',
'zoologo': 'ciencias',
'quimico': 'ciencias',
'matematico': 'ciencias',
'meteorologo': 'ciencias',
'periodista': 'humanidades',
'dibujante': 'humanidades',
'fotografo': 'humanidades',
'traductor': 'humanidades',
'presidente': 'jefe',
'gerente': 'jefe'
}
```
All comments from the AMA threads mentioning any of these professions were downloaded. They were then grouped, keeping only the comments that contain a question mark and received a reply from the thread author, forming question-answer pairs.
Finally, only the professions with more than 200 question-answer pairs were kept, which amounts to roughly 3000 question-answer pairs in total.
This can be inspected in the `qa_corpus_profesion.csv` file. |
MLRS/korpus_malti | 2022-08-30T08:59:09.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:mt",
"license:cc-by-nc-sa-4.0",
"region:us"
] | MLRS | General Corpora for the Maltese language. | @inproceedings{BERTu,
title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese",
author = "Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia",
booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing",
month = jul,
year = "2022",
address = "Hybrid",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.deeplo-1.10",
doi = "10.18653/v1/2022.deeplo-1.10",
pages = "90--101",
} | null | 0 | 318 | ---
pretty_name: Korpus Malti
language:
- mt
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
language_creators:
- found
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
license:
- cc-by-nc-sa-4.0
---
# Korpus Malti 🇲🇹
General Corpora for the Maltese Language.
This dataset is composed of texts from various genres/domains written in Maltese.
## Configurations
### Shuffled data
The default configuration (`"shuffled"`) yields the entire corpus from all genres:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti")
```
All sentences are combined together and shuffled, without preserving the sentence order.
No other annotations are present, so an instance would be of the following form:
```json
{
"text": "Din hija sentenza."
}
```
The training/validation/testing split is what was used to train the [BERTu](https://huggingface.co/MLRS/BERTu) model.
### Domain-split data
All other configurations contain a subset of the data.
For instance, this loads the Wikipedia portion:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti", "wiki")
```
For these configurations the data is not shuffled, so the sentence order on a document level is preserved.
An instance from these configurations would take the following form:
```json
{
"text": ["Din hija sentenza.", "U hawn oħra!"],
}
```
The raw data files contain additional metadata.
Its structure differs from one instance to another, depending on what's available from the source.
This information was typically scraped from the source itself & minimal processing is performed on such data.
## Additional Information
### Dataset Curators
The dataset was created by [Albert Gatt](https://albertgatt.github.io), [Kurt Micallef](https://www.um.edu.mt/profile/kurtmicallef), [Marc Tanti](https://www.um.edu.mt/profile/marctanti), [Lonneke van der Plas](https://sites.google.com/site/lonnekenlp/) and [Claudia Borg](https://www.um.edu.mt/profile/claudiaborg).
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/).
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
### Citation Information
This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://aclanthology.org/2022.deeplo-1.10/).
Cite it as follows:
```bibtex
@inproceedings{BERTu,
title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese",
author = "Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia",
booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing",
month = jul,
year = "2022",
address = "Hybrid",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.deeplo-1.10",
doi = "10.18653/v1/2022.deeplo-1.10",
pages = "90--101",
}
```
|
bigheiniuJ/EvalMetaICLAll | 2023-07-24T06:39:16.000Z | [
"region:us"
] | bigheiniuJ | null | null | null | 0 | 318 | ---
dataset_info:
features:
- name: task
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: options
sequence: string
- name: seed
dtype: string
- name: split
dtype: string
splits:
- name: meta_train
num_bytes: 648803062
num_examples: 1111614
- name: meta_eval_100shot
num_bytes: 1798838431
num_examples: 2725939
download_size: 1076308849
dataset_size: 2447641493
---
# Dataset Card for "EvalMetaICLAll"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
medalpaca/medical_meadow_mmmlu | 2023-04-06T17:49:48.000Z | [
"region:us"
] | medalpaca | null | null | null | 0 | 316 | Entry not found |
biu-nlp/abstract-sim | 2023-05-29T09:33:17.000Z | [
"region:us"
] | biu-nlp | null | null | null | 2 | 316 | A dataset of Wikipedia sentences accompannied by valid and invalid abstract descriptions. |
mteb/toxic_conversations_50k | 2022-09-27T19:14:35.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 2 | 315 | ---
language:
- en
---
# Toxic Conversation
This is a version of the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview). It contains comments from the Civil Comments platform together with annotations indicating whether each comment is toxic or not.
This dataset contains only the first 50k training examples.
Each example was annotated by 10 annotators and, as recommended on the task page, a comment is labeled as toxic when target >= 0.5.
The dataset is imbalanced, with only about 8% of the comments marked as toxic.
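A minimal sketch of the labeling rule described above (the `target` score comes from the original Jigsaw data; the name is taken from that source, not from this dataset's columns):
```python
def toxicity_label(target: float) -> int:
    """Binarize the Jigsaw `target` score as recommended on the task page."""
    return int(target >= 0.5)  # 1 = toxic, 0 = non-toxic
```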
|
marmal88/skin_cancer | 2023-01-25T02:21:28.000Z | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"language:en",
"skin_cancer",
"HAM10000",
"region:us"
] | marmal88 | null | null | null | 4 | 315 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: lesion_id
dtype: string
- name: dx
dtype: string
- name: dx_type
dtype: string
- name: age
dtype: float64
- name: sex
dtype: string
- name: localization
dtype: string
splits:
- name: train
num_bytes: 2490501038.358
num_examples: 9577
- name: test
num_bytes: 351507473.24
num_examples: 1285
- name: validation
num_bytes: 681758880.144
num_examples: 2492
download_size: 3693626934
dataset_size: 3523767391.7419996
task_categories:
- image-classification
- image-segmentation
language:
- en
tags:
- skin_cancer
- HAM10000
pretty_name: HAM10000
size_categories:
- 1K<n<10K
---
# The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions
- Original Paper and Dataset [here](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T)
- Kaggle dataset [here](https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000?resource=download)
# Introduction to datasets
Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: Actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc).
More than 50% of lesions are confirmed through histopathology (histo), the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal).
The test set is not public, but the evaluation server remains running (see the challenge website). Any publications written using the HAM10000 data should be evaluated on the official test set hosted there, so that methods can be fairly compared.
- Test site can be accessed [here](https://challenge.isic-archive.com/landing/2018/)
# Disclaimer and additional information
This is a contribution of open-source image data to Hugging Face. The images can be obtained from the links above.
The train/test split was done using stratified splitting by cancer/diagnosis type (see the sketch below). The code used to stratify the dataset is available on my GitHub [here](https://github.com/marmal88/skin_cancer).
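A minimal sketch of such a stratified split with scikit-learn, assuming the HAM10000 metadata has been loaded into a pandas DataFrame named `metadata` with a `dx` (diagnosis) column; the split proportions are illustrative:
```python
from sklearn.model_selection import train_test_split

train_df, rest_df = train_test_split(
    metadata, test_size=0.3, stratify=metadata["dx"], random_state=42
)
validation_df, test_df = train_test_split(
    rest_df, test_size=1 / 3, stratify=rest_df["dx"], random_state=42
)
```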
I do not own any rights to the above images.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eReverter/cnn_dailymail_extractive | 2023-07-19T18:45:02.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:1903.10318",
"region:us"
] | eReverter | null | null | null | 0 | 315 | ---
dataset_info:
features:
- name: src
sequence: string
- name: tgt
sequence: string
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 53831114
num_examples: 11490
- name: train
num_bytes: 1376640992
num_examples: 287113
- name: validation
num_bytes: 62200550
num_examples: 13368
download_size: 857262516
dataset_size: 1492672656
license: mit
task_categories:
- summarization
language:
- en
size_categories:
- 100K<n<1M
---
## Data Card for Extractive CNN/DailyMail Dataset
### Overview
This is an extractive version of the [CNN/Dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset. The structure of this dataset is identical to the original except for a minor modification in the data representation and the introduction of labels that denote the extractive summary.
The labels are generated following a greedy algorithm, as proposed by [Liu (2019)](https://arxiv.org/abs/1903.10318) (see the sketch below). The curation process can be found in the [bertsum-hf](https://github.com/eReverter/bertsum-hf) repository. It is uploaded here in case someone does not want to go through the preprocessing, although Liu also provides a training-ready version in his [bertsum](https://github.com/nlpyang/BertSum) repository!
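A minimal sketch of the greedy selection idea (a simple unigram-recall proxy stands in here for the ROUGE scoring used by Liu, 2019):
```python
def overlap_score(selected, abstract_tokens):
    """Unigram recall of the selected sentences against the abstract."""
    selected_tokens = {tok for sent in selected for tok in sent.lower().split()}
    return len(selected_tokens & abstract_tokens) / max(len(abstract_tokens), 1)

def greedy_labels(src_sents, tgt_sents, max_sents=3):
    abstract_tokens = {tok for sent in tgt_sents for tok in sent.lower().split()}
    selected, labels = [], [0] * len(src_sents)
    for _ in range(max_sents):
        base = overlap_score(selected, abstract_tokens)
        best_i, best_gain = None, 0.0
        for i, sent in enumerate(src_sents):
            if labels[i]:
                continue
            gain = overlap_score(selected + [sent], abstract_tokens) - base
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            break  # no remaining sentence improves the score
        labels[best_i] = 1
        selected.append(src_sents[best_i])
    return labels
```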
In this dataset:
- 'src' corresponds to 'article',
- 'tgt' equates to 'abstract',
- 'labels' represents a mapping of sentences forming the extractive summary.
### Data Architecture
Each entry in the dataset contains the following fields:
- `id`: a unique `string` identifier for each example.
- `src`: a `list[string]` field representing the original news article. Each string in the list is a separate sentence from the article.
- `tgt`: a `list[string]` field representing the professionally edited highlights or abstract of the article.
- `labels`: a `list[bool]` field with binary values. Each value corresponds to a sentence in 'src', indicating whether that sentence is part of the extractive summary (1 for True, 0 for False); see the snippet below for reconstructing the extractive summary from these labels.
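```python
from datasets import load_dataset

dataset = load_dataset("eReverter/cnn_dailymail_extractive", split="validation")
example = dataset[0]

# Sentences flagged with 1 form the extractive reference summary.
extractive_summary = [sent for sent, keep in zip(example["src"], example["labels"]) if keep]
print(" ".join(extractive_summary))
```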
### Sample Data Entry
Here is an illustrative example from the dataset:
```json
{
"id": "1",
"src": ["This is the first sentence",
"This is the second"],
"tgt": ["This is one of the highlights"],
"labels": [1, 0]
}
```
In this example, the first sentence of the article is selected as part of the extractive summary (as indicated by '1' in the 'labels'), while the second sentence is not ('0' in the 'labels').
### Usage
The extractive CNN/DailyMail dataset can be used to train and evaluate models for extractive text summarization tasks. It allows models to learn to predict which sentences from an original text contribute to a summary, providing a binary mapping as a reference. The 'tgt' or 'abstract' field can serve as a basis for comparison, helping to assess how well the selected sentences cover the key points in the abstract. |
pvduy/arena_synth | 2023-08-02T16:02:03.000Z | [
"region:us"
] | pvduy | null | null | null | 0 | 315 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 53190421
num_examples: 29851
- name: test
num_bytes: 14269380
num_examples: 8000
download_size: 36514341
dataset_size: 67459801
---
# Dataset Card for "arena_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdaptLLM/finance-tasks | 2023-09-26T08:36:08.000Z | [
"arxiv:2309.09530",
"region:us"
] | AdaptLLM | null | null | null | 3 | 315 | ---
configs:
- config_name: ConvFinQA
data_files:
- split: test
path: "ConviFinQA/test.json"
- config_name: FiQA_SA
data_files:
- split: test
path: "FiQA_SA/test.json"
- config_name: FPB
data_files:
- split: test
path: "FPB/test.json"
- config_name: Headline
data_files:
- split: test
path: "Headline/test.json"
- config_name: NER
data_files:
- split: test
path: "NER/test.json"
---
# Adapting Large Language Models via Reading Comprehension
This repo contains the evaluation datasets for our paper [Adapting Large Language Models via Reading Comprehension](https://arxiv.org/pdf/2309.09530.pdf)
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in **biomedicine, finance, and law domains**. Our 7B model competes with much larger domain-specific models like BloombergGPT-50B. Moreover, our domain-specific reading comprehension texts enhance model performance even on general benchmarks, indicating potential for developing a general LLM across more domains.
## GitHub repo:
https://github.com/microsoft/LMOps
## Domain-specific LLMs:
Our models of different domains are now available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are:
<p align='center'>
<img src="./comparison.png" width="700">
</p>
## Domain-specific Tasks:
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
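A minimal way to load one of the finance configurations listed in this card (the per-task field names are not documented here, so inspect a record first):
```python
from datasets import load_dataset

fpb_test = load_dataset("AdaptLLM/finance-tasks", "FPB", split="test")
print(fpb_test[0])
```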
## Citation:
```bibtex
@inproceedings{AdaptLLM,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
url={https://arxiv.org/abs/2309.09530},
year={2023},
}
```
|
yxchar/rct-20k-tlm | 2021-11-05T01:18:46.000Z | [
"region:us"
] | yxchar | null | null | null | 0 | 314 | Entry not found |
mteb/stackoverflowdupquestions-reranking | 2022-09-27T19:13:01.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 314 | ---
language:
- en
--- |
allenai/wmt22_african | 2022-08-15T21:52:43.000Z | [
"region:us"
] | allenai | null | null | null | 3 | 314 | # Dataset Card for allenai/wmt22_african
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.statmt.org/wmt22/large-scale-multilingual-translation-task.html
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset was created based on [metadata](https://github.com/facebookresearch/LASER/tree/main/data/wmt22_african) for mined bitext released by Meta AI. It contains bitext for 248 language pairs covering the African languages that are part of the [2022 WMT Shared Task on Large Scale Machine Translation Evaluation for African Languages](https://www.statmt.org/wmt22/large-scale-multilingual-translation-task.html).
#### How to use the data
There are two ways to access the data:
* Via the Hugging Face Python datasets library
```
from datasets import load_dataset
dataset = load_dataset("allenai/wmt22_african")
```
* Clone the git repo
```
git lfs install
git clone https://huggingface.co/datasets/allenai/wmt22_african
```
### Supported Tasks and Leaderboards
This dataset is one of the resources allowed under the Constrained Track for the [2022 WMT Shared Task on Large Scale Machine Translation Evaluation for African Languages](https://www.statmt.org/wmt22/large-scale-multilingual-translation-task.html).
### Languages
#### Focus languages
| Language | Code |
| -------- | ---- |
| Afrikaans | afr |
| Amharic | amh |
| Chichewa | nya |
| Nigerian Fulfulde | fuv |
| Hausa | hau |
| Igbo | ibo |
| Kamba | kam |
| Kinyarwanda | kin |
| Lingala | lin |
| Luganda | lug |
| Luo | luo |
| Northern Sotho | nso |
| Oromo | orm |
| Shona | sna |
| Somali | som |
| Swahili | swh |
| Swati | ssw |
| Tswana | tsn |
| Umbundu | umb |
| Wolof | wol |
| Xhosa | xho |
| Xitsonga | tso |
| Yoruba | yor |
| Zulu | zul |
Colonial linguae francae: English - eng, French - fra
## Dataset Structure
The dataset contains gzipped tab delimited text files for each direction. Each text file contains lines with parallel sentences.
### Data Instances
The dataset contains 248 language pairs.
Sentence counts for each pair can be found [here](https://huggingface.co/datasets/allenai/wmt22_african/blob/main/sentence_counts.txt).
### Data Fields
Every instance for a language pair contains the following fields: 'translation' (containing sentence pairs), 'laser_score', 'source_sentence_lid', 'target_sentence_lid', where 'lid' is language classification probability.
Example:
```
{
'translation':
{
'afr': 'In Mei 2007, in ooreenstemming met die spesifikasies van die Java Gemeenskapproses, het Sun Java tegnologie geherlisensieer onder die GNU General Public License.',
'eng': 'As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of its Java technologies under the GNU General Public License.'
},
'laser_score': 1.0717015266418457,
'source_sentence_lid': 0.9996600151062012,
'target_sentence_lid': 0.9972000122070312
}
```
### Data Splits
The data is not split into train, dev, and test.
## Dataset Creation
### Curation Rationale
Parallel sentences from monolingual data in Common Crawl and ParaCrawl were identified via [Language-Agnostic Sentence Representation (LASER)](https://github.com/facebookresearch/LASER) encoders.
### Source Data
#### Initial Data Collection and Normalization
Monolingual data was obtained from Common Crawl and ParaCrawl.
#### Who are the source language producers?
Contributors to web text in Common Crawl and ParaCrawl.
### Annotations
#### Annotation process
The data was not human annotated. The metadata used to create the dataset can be found here: https://github.com/facebookresearch/LASER/tree/main/data/wmt22_african
#### Who are the annotators?
The data was not human annotated. Parallel text from Common Crawl and Para Crawl monolingual data were identified automatically via [LASER](https://github.com/facebookresearch/LASER) encoders.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides data for training machine learning systems for many languages that have low resources available for NLP.
### Discussion of Biases
Biases in the data have not been studied.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the Internet Archive [Terms of Use](https://archive.org/about/terms.php) in respect of the content contained in the dataset.
### Citation Information
NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv, 2022.
### Contributions
We thank the AllenNLP team at AI2 for hosting and releasing this data, including [Akshita Bhagia](https://akshitab.github.io/) (for engineering efforts to create the huggingface dataset), and [Jesse Dodge](https://jessedodge.github.io/) (for organizing the connection).
|
argilla/research_titles_multi-label | 2022-10-07T13:22:53.000Z | [
"region:us"
] | argilla | null | null | null | 0 | 314 | Entry not found |
code_x_glue_tc_text_to_code | 2022-11-18T19:31:29.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:other-programming-languages",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:code",
"language:en",
"license:c-uda",
"text-to-code",
"region:us"
] | null | We use concode dataset which is a widely used code generation dataset from Iyer's EMNLP 2018 paper Mapping Language to Code in Programmatic Context. See paper for details. | @article{iyer2018mapping,
title={Mapping language to code in programmatic context},
author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1808.09588},
year={2018}
} | null | 18 | 313 | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
- en
license:
- c-uda
multilinguality:
- other-programming-languages
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: CodeXGlueTcTextToCode
tags:
- text-to-code
dataset_info:
features:
- name: id
dtype: int32
- name: nl
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 96225611
num_examples: 100000
- name: validation
num_bytes: 1749751
num_examples: 2000
- name: test
num_bytes: 1609306
num_examples: 2000
download_size: 100769638
dataset_size: 99584668
---
# Dataset Card for "code_x_glue_tc_text_to_code"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
### Dataset Summary
CodeXGLUE text-to-code dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
The dataset we use is crawled and filtered from Microsoft Documentation, whose documents are located at https://github.com/MicrosoftDocs/.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for generating Java code from an **English** natural language description.
### Languages
- Java **programming** language
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"code": "boolean function ( ) { return isParsed ; }",
"id": 0,
"nl": "check if details are parsed . concode_field_sep Container parent concode_elem_sep boolean isParsed concode_elem_sep long offset concode_elem_sep long contentStartPosition concode_elem_sep ByteBuffer deadBytes concode_elem_sep boolean isRead concode_elem_sep long memMapSize concode_elem_sep Logger LOG concode_elem_sep byte[] userType concode_elem_sep String type concode_elem_sep ByteBuffer content concode_elem_sep FileChannel fileChannel concode_field_sep Container getParent concode_elem_sep byte[] getUserType concode_elem_sep void readContent concode_elem_sep long getOffset concode_elem_sep long getContentSize concode_elem_sep void getContent concode_elem_sep void setDeadBytes concode_elem_sep void parse concode_elem_sep void getHeader concode_elem_sep long getSize concode_elem_sep void parseDetails concode_elem_sep String getType concode_elem_sep void _parseDetails concode_elem_sep String getPath concode_elem_sep boolean verify concode_elem_sep void setParent concode_elem_sep void getBox concode_elem_sep boolean isSmallBox"
}
```
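A minimal way to load the default config and inspect the documented fields:
```python
from datasets import load_dataset

dataset = load_dataset("code_x_glue_tc_text_to_code", split="train")
print(dataset[0]["nl"])    # natural language description
print(dataset[0]["code"])  # target Java code
```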
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### default
|field name| type | description |
|----------|------|---------------------------------------------|
|id |int32 | Index of the sample |
|nl |string| The natural language description of the task|
|code |string| The programming source code for the task |
### Data Splits
| name |train |validation|test|
|-------|-----:|---------:|---:|
|default|100000| 2000|2000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{iyer2018mapping,
title={Mapping language to code in programmatic context},
author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1808.09588},
year={2018}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
polyglot_ner | 2023-04-05T13:36:52.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:ar",
"language:bg",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:lt",
"language:lv",
"language:ms",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:th",
"language:tl",
"language:tr",
"language:uk",
"language:vi",
"language:zh",
"license:unknown",
"arxiv:1410.3791",
"region:us"
] | null | Polyglot-NER
A training dataset automatically generated from Wikipedia and Freebase for the task
of named entity recognition. The dataset contains the basic Wikipedia-based
training data (with coreference resolution) for the 40 languages covered, for the task of
named entity recognition. The details of the generation procedure are outlined in
Section 3 of the paper (https://arxiv.org/abs/1410.3791). Each config contains the data
corresponding to a different language. For example, "es" includes only Spanish examples. | @article{polyglotner,
author = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},
title = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},
journal = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30- May 2, 2015}},
month = {April},
year = {2015},
publisher = {SIAM},
} | null | 20 | 313 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- he
- hi
- hr
- hu
- id
- it
- ja
- ko
- lt
- lv
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- th
- tl
- tr
- uk
- vi
- zh
license:
- unknown
multilinguality:
- multilingual
pretty_name: Polyglot-NER
size_categories:
- unknown
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: polyglot-ner
dataset_info:
- config_name: ca
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 143746026
num_examples: 372665
download_size: 1107018606
dataset_size: 143746026
- config_name: de
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 156744752
num_examples: 547578
download_size: 1107018606
dataset_size: 156744752
- config_name: es
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 145387551
num_examples: 386699
download_size: 1107018606
dataset_size: 145387551
- config_name: fi
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 95175890
num_examples: 387465
download_size: 1107018606
dataset_size: 95175890
- config_name: hi
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 177698330
num_examples: 401648
download_size: 1107018606
dataset_size: 177698330
- config_name: id
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 152560050
num_examples: 463862
download_size: 1107018606
dataset_size: 152560050
- config_name: ko
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 174523416
num_examples: 560105
download_size: 1107018606
dataset_size: 174523416
- config_name: ms
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 155268778
num_examples: 528181
download_size: 1107018606
dataset_size: 155268778
- config_name: pl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 159684112
num_examples: 623267
download_size: 1107018606
dataset_size: 159684112
- config_name: ru
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 200717423
num_examples: 551770
download_size: 1107018606
dataset_size: 200717423
- config_name: sr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 183437513
num_examples: 559423
download_size: 1107018606
dataset_size: 183437513
- config_name: tl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 47104871
num_examples: 160750
download_size: 1107018606
dataset_size: 47104871
- config_name: vi
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 141062258
num_examples: 351643
download_size: 1107018606
dataset_size: 141062258
- config_name: ar
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 183551222
num_examples: 339109
download_size: 1107018606
dataset_size: 183551222
- config_name: cs
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 156792129
num_examples: 564462
download_size: 1107018606
dataset_size: 156792129
- config_name: el
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 195456401
num_examples: 446052
download_size: 1107018606
dataset_size: 195456401
- config_name: et
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 21961619
num_examples: 87023
download_size: 1107018606
dataset_size: 21961619
- config_name: fr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 147560734
num_examples: 418411
download_size: 1107018606
dataset_size: 147560734
- config_name: hr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 154151689
num_examples: 629667
download_size: 1107018606
dataset_size: 154151689
- config_name: it
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 147520094
num_examples: 378325
download_size: 1107018606
dataset_size: 147520094
- config_name: lt
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 165319919
num_examples: 848018
download_size: 1107018606
dataset_size: 165319919
- config_name: nl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 150737871
num_examples: 520664
download_size: 1107018606
dataset_size: 150737871
- config_name: pt
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 145627857
num_examples: 396773
download_size: 1107018606
dataset_size: 145627857
- config_name: sk
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 134174889
num_examples: 500135
download_size: 1107018606
dataset_size: 134174889
- config_name: sv
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 157058369
num_examples: 634881
download_size: 1107018606
dataset_size: 157058369
- config_name: tr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 164456506
num_examples: 607324
download_size: 1107018606
dataset_size: 164456506
- config_name: zh
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 165056969
num_examples: 1570853
download_size: 1107018606
dataset_size: 165056969
- config_name: bg
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 190509195
num_examples: 559694
download_size: 1107018606
dataset_size: 190509195
- config_name: da
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 150551293
num_examples: 546440
download_size: 1107018606
dataset_size: 150551293
- config_name: en
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 145491677
num_examples: 423982
download_size: 1107018606
dataset_size: 145491677
- config_name: fa
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 180093656
num_examples: 492903
download_size: 1107018606
dataset_size: 180093656
- config_name: he
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 177231613
num_examples: 459933
download_size: 1107018606
dataset_size: 177231613
- config_name: hu
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 160702240
num_examples: 590218
download_size: 1107018606
dataset_size: 160702240
- config_name: ja
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 193679570
num_examples: 1691018
download_size: 1107018606
dataset_size: 193679570
- config_name: lv
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 76256241
num_examples: 331568
download_size: 1107018606
dataset_size: 76256241
- config_name: 'no'
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 152431612
num_examples: 552176
download_size: 1107018606
dataset_size: 152431612
- config_name: ro
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 96369897
num_examples: 285985
download_size: 1107018606
dataset_size: 96369897
- config_name: sl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 148140079
num_examples: 521251
download_size: 1107018606
dataset_size: 148140079
- config_name: th
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 360409343
num_examples: 217631
download_size: 1107018606
dataset_size: 360409343
- config_name: uk
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 198251631
num_examples: 561373
download_size: 1107018606
dataset_size: 198251631
- config_name: combined
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 6286855097
num_examples: 21070925
download_size: 1107018606
dataset_size: 6286855097
---
# Dataset Card for Polyglot-NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/rmyeid/projects/polylgot-ner](https://sites.google.com/site/rmyeid/projects/polylgot-ner)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 45.39 GB
- **Size of the generated dataset:** 12.54 GB
- **Total amount of disk used:** 57.93 GB
### Dataset Summary
Polyglot-NER
A training dataset automatically generated from Wikipedia and Freebase for the task
of named entity recognition. The dataset contains the basic Wikipedia-based
training data (with coreference resolution) for the 40 languages covered, for the task of
named entity recognition. The details of the generation procedure are outlined in
Section 3 of the paper (https://arxiv.org/abs/1410.3791). Each config contains the data
corresponding to a different language. For example, "es" includes only Spanish examples.
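A minimal way to load a single language configuration (here Spanish), or the `combined` config for all languages:
```python
from datasets import load_dataset

spanish = load_dataset("polyglot_ner", "es", split="train")
print(spanish[0]["words"][:10])
print(spanish[0]["ner"][:10])
```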
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### ar
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 183.55 MB
- **Total amount of disk used:** 1.29 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "2",
"lang": "ar",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "PER", "PER", "PER", "PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"وفي\", \"مرحلة\", \"موالية\", \"أنشأت\", \"قبيلة\", \"مكناسة\", \"الزناتية\", \"مكناسة\", \"تازة\", \",\", \"وأقام\", \"بها\", \"المرابطون\", \"قلعة\", \"..."
}
```
#### bg
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 190.51 MB
- **Total amount of disk used:** 1.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "1",
"lang": "bg",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"Дефиниция\", \"Наименованията\", \"\\\"\", \"книжовен\", \"\\\"/\\\"\", \"литературен\", \"\\\"\", \"език\", \"на\", \"български\", \"за\", \"тази\", \"кодифи..."
}
```
#### ca
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 143.75 MB
- **Total amount of disk used:** 1.25 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "2",
"lang": "ca",
"ner": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O...",
"words": "[\"Com\", \"a\", \"compositor\", \"deixà\", \"un\", \"immens\", \"llegat\", \"que\", \"inclou\", \"8\", \"simfonies\", \"(\", \"1822\", \"),\", \"diverses\", ..."
}
```
#### combined
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 6.29 GB
- **Total amount of disk used:** 7.39 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "18",
"lang": "es",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"Los\", \"cambios\", \"en\", \"la\", \"energía\", \"libre\", \"de\", \"Gibbs\", \"\\\\\", \"Delta\", \"G\", \"nos\", \"dan\", \"una\", \"cuantificación\", \"de..."
}
```
#### cs
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 156.79 MB
- **Total amount of disk used:** 1.26 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "3",
"lang": "cs",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"Historie\", \"Symfonická\", \"forma\", \"se\", \"rozvinula\", \"se\", \"především\", \"v\", \"období\", \"klasicismu\", \"a\", \"romantismu\", \",\", \"..."
}
```
### Data Fields
The data fields are the same among all splits.
#### ar
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### bg
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### ca
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### combined
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### cs
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
### Data Splits
| name | train |
|----------|---------:|
| ar | 339109 |
| bg | 559694 |
| ca | 372665 |
| combined | 21070925 |
| cs | 564462 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{polyglotner,
author = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},
title = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},
journal = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30- May 2, 2015}},
month = {April},
year = {2015},
publisher = {SIAM},
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
mteb/scidocs-reranking | 2022-09-27T19:11:31.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 313 | ---
language:
- en
--- |
nlp-thedeep/humset | 2023-05-25T17:14:31.000Z | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:fr",
"language:es",
"license:apache-2.0",
"humanitarian",
"research",
"analytical-framework",
"multilabel",
"humset",
"humbert",
"region:us"
] | nlp-thedeep | HumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly available resources. For each document, analysts have identified informative snippets (entries) with respect to common humanitarian frameworks and assigned one or more classes to each entry. See our paper for details. | @misc{https://doi.org/10.48550/arxiv.2210.04573,
doi = {10.48550/ARXIV.2210.04573},
url = {https://arxiv.org/abs/2210.04573},
author = {Fekih, Selim and Tamagnone, Nicolò and Minixhofer, Benjamin and Shrestha, Ranjan and Contla, Ximena and Oglethorpe, Ewan and Rekabsaz, Navid},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HumSet: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crisis Response},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 1 | 313 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- es
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: HumSet
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- humanitarian
- research
- analytical-framework
- multilabel
- humset
- humbert
task_categories:
- text-classification
- text-retrieval
- token-classification
task_ids:
- multi-label-classification
dataset_info:
features:
- name: entry_id
dtype: string
- name: lead_id
dtype: string
- name: project_id
dtype: string
- name: lang
dtype: string
- name: n_tokens
dtype: int64
- name: project_title
dtype: string
- name: created_at
dtype: string
- name: document
dtype: string
- name: excerpt
dtype: string
- name: sectors
sequence:
class_label:
names:
0: Agriculture
1: Cross
2: Education
3: Food Security
4: Health
5: Livelihoods
6: Logistics
7: Nutrition
8: Protection
9: Shelter
10: WASH
- name: pillars_1d
sequence:
class_label:
names:
0: Casualties
1: Context
2: Covid-19
3: Displacement
4: Humanitarian Access
5: Information And Communication
6: Shock/Event
- name: pillars_2d
sequence:
class_label:
names:
0: At Risk
1: Capacities & Response
2: Humanitarian Conditions
3: Impact
4: Priority Interventions
5: Priority Needs
- name: subpillars_1d
sequence:
class_label:
names:
0: Casualties->Dead
1: Casualties->Injured
2: Casualties->Missing
3: Context->Demography
4: Context->Economy
5: Context->Environment
6: Context->Legal & Policy
7: Context->Politics
8: Context->Security & Stability
9: Context->Socio Cultural
10: Covid-19->Cases
11: Covid-19->Contact Tracing
12: Covid-19->Deaths
13: Covid-19->Hospitalization & Care
14: Covid-19->Restriction Measures
15: Covid-19->Testing
16: Covid-19->Vaccination
17: Displacement->Intentions
18: Displacement->Local Integration
19: Displacement->Pull Factors
20: Displacement->Push Factors
21: Displacement->Type/Numbers/Movements
22: Humanitarian Access->Number Of People Facing Humanitarian Access Constraints/Humanitarian Access Gaps
23: Humanitarian Access->Physical Constraints
24: Humanitarian Access->Population To Relief
25: Humanitarian Access->Relief To Population
26: Information And Communication->Communication Means And Preferences
27: Information And Communication->Information Challenges And Barriers
28: Information And Communication->Knowledge And Info Gaps (Hum)
29: Information And Communication->Knowledge And Info Gaps (Pop)
30: Shock/Event->Hazard & Threats
31: Shock/Event->Type And Characteristics
32: Shock/Event->Underlying/Aggravating Factors
- name: subpillars_2d
sequence:
class_label:
names:
0: At Risk->Number Of People At Risk
1: At Risk->Risk And Vulnerabilities
2: Capacities & Response->International Response
3: Capacities & Response->Local Response
4: Capacities & Response->National Response
5: Capacities & Response->Number Of People Reached/Response Gaps
6: Humanitarian Conditions->Coping Mechanisms
7: Humanitarian Conditions->Living Standards
8: Humanitarian Conditions->Number Of People In Need
9: Humanitarian Conditions->Physical And Mental Well Being
10: Impact->Driver/Aggravating Factors
11: Impact->Impact On People
12: Impact->Impact On Systems, Services And Networks
13: Impact->Number Of People Affected
14: Priority Interventions->Expressed By Humanitarian Staff
15: Priority Interventions->Expressed By Population
16: Priority Needs->Expressed By Humanitarian Staff
17: Priority Needs->Expressed By Population
splits:
- name: train
num_examples: 117435
- name: validation
num_examples: 16039
- name: test
num_examples: 15147
---
# Dataset Card for HumSet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [http://blog.thedeep.io/humset/](http://blog.thedeep.io/humset/)
- **Repository:** [https://github.com/the-deep/humset](https://github.com/the-deep/humset)
- **Paper:** [EMNLP Findings 2022](https://aclanthology.org/2022.findings-emnlp.321)
- **Leaderboard:**
- **Point of Contact:**[the DEEP NLP team](mailto:nlp@thedeep.io)
### Dataset Summary
HumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages: English, French, and Spanish, originally taken from publicly-available resources. For each document, analysts have identified informative snippets (entries) with respect to common humanitarian frameworks, and assigned one or many classes to each entry. See our paper for details.
### Supported Tasks and Leaderboards
This dataset is intended for multi-label classification.
### Languages
This dataset is in English, French, and Spanish.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- **entry_id**: unique identification number for a given entry. (string)
- **lead_id**: unique identification number for the document to which the corresponding entry belongs. (string)
- **project_id**: unique identification number for the project to which the corresponding entry belongs. (string)
- **sectors**, **pillars_1d**, **pillars_2d**, **subpillars_1d**, **subpillars_2d**: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. See the paper for a detailed description of these categories. (list)
- **lang**: language. (str)
- **n_tokens**: number of tokens (tokenized using NLTK v3.7 library). (int64)
- **project_title**: the name of the project where the corresponding annotation was created. (str)
- **created_at**: date and time of creation of the annotation in standard ISO 8601 format. (str)
- **document**: document URL source of the excerpt. (str)
- **excerpt**: excerpt text. (str)
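A minimal loading sketch is shown below; the Hub identifier `nlp-thedeep/humset` is an assumption based on this card, and the label decoding follows the `dataset_info` features above.
```python
from datasets import load_dataset

# Assumed Hub id for this dataset; adjust if it is hosted under a different name.
dataset = load_dataset("nlp-thedeep/humset")

sample = dataset["train"][0]
print(sample["excerpt"])  # annotated text snippet
print(sample["lang"])     # "en", "fr" or "es"

# Label columns are sequences of class ids; map them back to their names.
sectors = dataset["train"].features["sectors"].feature
print([sectors.int2str(i) for i in sample["sectors"]])
```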
### Data Splits
The dataset includes a set of train/validation/test splits, with 117435, 16039 and 15147 examples respectively.
## Dataset Creation
The collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em> developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data.
### Curation Rationale
[More Information Needed]
### Source Data
Documents are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more information.
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
#### Annotation process
HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages: English, French, and Spanish, originally taken from publicly-available resources. For each document, analysts have identified informative snippets (entries, or excerpts in the imported dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or many classes to each entry.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
NLP team at [Data Friendly Space](https://datafriendlyspace.org/)
### Licensing Information
The GitHub repository which houses this dataset has an Apache License 2.0.
### Citation Information
```
@inproceedings{fekih-etal-2022-humset,
title = "{H}um{S}et: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crises Response",
author = "Fekih, Selim and
Tamagnone, Nicolo{'} and
Minixhofer, Benjamin and
Shrestha, Ranjan and
Contla, Ximena and
Oglethorpe, Ewan and
Rekabsaz, Navid",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.321",
pages = "4379--4389",
}
```
|
llm-book/ner-wikipedia-dataset | 2023-07-25T17:19:14.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:ja",
"license:cc-by-sa-3.0",
"region:us"
] | llm-book | null | @inproceedings{omi-2021-wikipedia,
title = "Wikipediaを用いた日本語の固有表現抽出のデータセットの構築",
author = "近江 崇宏",
booktitle = "言語処理学会第27回年次大会",
year = "2021",
url = "https://anlp.jp/proceedings/annual_meeting/2021/pdf_dir/P2-7.pdf",
} | null | 0 | 313 | ---
language:
- ja
license:
- cc-by-sa-3.0
size_categories:
- 1K<n<10K
task_categories:
- token-classification
---
# Dataset Card for llm-book/ner-wikipedia-dataset
This is the "Japanese named entity recognition dataset built from Wikipedia" (「Wikipediaを用いた日本語の固有表現抽出データセット」, Version 2.0), created by Stockmark Inc. and used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
It relies on the dataset published in the GitHub repository [stockmarkteam/ner-wikipedia-dataset](https://github.com/stockmarkteam/ner-wikipedia-dataset).
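A minimal usage sketch with 🤗 Datasets (split and column names are not documented on this card, so they are inspected rather than assumed):
```python
from datasets import load_dataset

dataset = load_dataset("llm-book/ner-wikipedia-dataset")
print(dataset)                     # show the available splits and columns
first_split = next(iter(dataset))  # e.g. "train"
print(dataset[first_split][0])     # one annotated example
```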
### Citation
```bibtex
@inproceedings{omi-2021-wikipedia,
title = "Wikipediaを用いた日本語の固有表現抽出のデータセットの構築",
author = "近江 崇宏",
booktitle = "言語処理学会第27回年次大会",
year = "2021",
url = "https://anlp.jp/proceedings/annual_meeting/2021/pdf_dir/P2-7.pdf",
}
```
### Licence
The dataset follows the same CC BY-SA 3.0 license as the Japanese edition of Wikipedia.
|
distil-whisper/librispeech_asr-timestamped | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | null | 0 | 313 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: LibriSpeech ASR
---
# Distil Whisper: LibriSpeech ASR With Timestamps
This is a variant of the [LibriSpeech ASR](https://huggingface.co/datasets/librispeech_asr) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/librispeech_asr).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all")
# take the first sample of the validation set
sample = dataset["validation.clean"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation.clean"]))
```
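The column holding the pseudo-labelled transcription is not named on this card; the sketch below assumes it is called `whisper_transcript` (a hypothetical name) and simply inspects the available keys first.
```python
# Continuing from the streaming example above.
print(sample.keys())             # list the available columns
print(sample["text"])            # original reference transcription
# Hypothetical column name for the Whisper pseudo-label with timestamps:
print(sample.get("whisper_transcript"))
```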
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
lince | 2023-04-05T10:09:24.000Z | [
"region:us"
] | null | LinCE is a centralized Linguistic Code-switching Evaluation benchmark
(https://ritual.uh.edu/lince/) that contains data for training and evaluating
NLP systems on code-switching tasks. | @inproceedings{aguilar-etal-2020-lince,
title = "{L}in{CE}: A Centralized Benchmark for Linguistic Code-switching Evaluation",
author = "Aguilar, Gustavo and
Kar, Sudipta and
Solorio, Thamar",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.223",
pages = "1803--1813",
language = "English",
ISBN = "979-10-95546-34-4",
}
Note that each LinCE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset. | null | 5 | 312 | ---
paperswithcode_id: lince
pretty_name: Linguistic Code-switching Evaluation Dataset
dataset_info:
- config_name: lid_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 4745003
num_examples: 21030
- name: validation
num_bytes: 739950
num_examples: 3332
- name: test
num_bytes: 1337727
num_examples: 8289
download_size: 1188861
dataset_size: 6822680
- config_name: lid_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 1662284
num_examples: 4823
- name: validation
num_bytes: 268930
num_examples: 744
- name: test
num_bytes: 456850
num_examples: 1854
download_size: 432854
dataset_size: 2388064
- config_name: lid_msaea
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 3804156
num_examples: 8464
- name: validation
num_bytes: 490566
num_examples: 1116
- name: test
num_bytes: 590488
num_examples: 1663
download_size: 803806
dataset_size: 4885210
- config_name: lid_nepeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 2239014
num_examples: 8451
- name: validation
num_bytes: 351649
num_examples: 1332
- name: test
num_bytes: 620512
num_examples: 3228
download_size: 545342
dataset_size: 3211175
- config_name: pos_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: pos
sequence: string
splits:
- name: train
num_bytes: 5467832
num_examples: 27893
- name: validation
num_bytes: 840593
num_examples: 4298
- name: test
num_bytes: 1758626
num_examples: 10720
download_size: 819657
dataset_size: 8067051
- config_name: pos_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: pos
sequence: string
splits:
- name: train
num_bytes: 537541
num_examples: 1030
- name: validation
num_bytes: 80886
num_examples: 160
- name: test
num_bytes: 131192
num_examples: 299
download_size: 113872
dataset_size: 749619
- config_name: ner_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 9836312
num_examples: 33611
- name: validation
num_bytes: 2980990
num_examples: 10085
- name: test
num_bytes: 6530956
num_examples: 23527
download_size: 3075520
dataset_size: 19348258
- config_name: ner_msaea
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 3887684
num_examples: 10103
- name: validation
num_bytes: 431414
num_examples: 1122
- name: test
num_bytes: 367310
num_examples: 1110
download_size: 938671
dataset_size: 4686408
- config_name: ner_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 474639
num_examples: 1243
- name: validation
num_bytes: 121403
num_examples: 314
- name: test
num_bytes: 185220
num_examples: 522
download_size: 141285
dataset_size: 781262
- config_name: sa_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: sa
dtype: string
splits:
- name: train
num_bytes: 3587783
num_examples: 12194
- name: validation
num_bytes: 546692
num_examples: 1859
- name: test
num_bytes: 1349407
num_examples: 4736
download_size: 1031412
dataset_size: 5483882
---
# Dataset Card for "lince"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ritual.uh.edu/lince](http://ritual.uh.edu/lince)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 9.09 MB
- **Size of the generated dataset:** 56.42 MB
- **Total amount of disk used:** 65.52 MB
### Dataset Summary
LinCE is a centralized Linguistic Code-switching Evaluation benchmark
(https://ritual.uh.edu/lince/) that contains data for training and evaluating
NLP systems on code-switching tasks.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### lid_hineng
- **Size of downloaded dataset files:** 0.43 MB
- **Size of the generated dataset:** 2.39 MB
- **Total amount of disk used:** 2.82 MB
An example of 'validation' looks as follows.
```
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed", "lang1", "lang1", "other"],
"words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I", "could", "have", "offered", "you", "some", "ironic", "chai-tea", "for", "it", ";)"]
}
```
#### lid_msaea
- **Size of downloaded dataset files:** 0.81 MB
- **Size of the generated dataset:** 4.89 MB
- **Total amount of disk used:** 5.69 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"idx": 0,
"lid": ["ne", "lang2", "other", "lang2", "lang2", "other", "other", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "other", "lang2", "lang2", "lang2", "ne", "lang2", "lang2"],
"words": "[\"علاء\", \"بخير\", \"،\", \"معنوياته\", \"كويسة\", \".\", \"..\", \"اسخف\", \"حاجة\", \"بس\", \"ان\", \"كل\", \"واحد\", \"منهم\", \"بييقى\", \"مقفول\", \"عليه\"..."
}
```
#### lid_nepeng
- **Size of downloaded dataset files:** 0.55 MB
- **Size of the generated dataset:** 3.21 MB
- **Total amount of disk used:** 3.75 MB
An example of 'validation' looks as follows.
```
{
"idx": 1,
"lid": ["other", "lang2", "lang2", "lang2", "lang2", "lang1", "lang1", "lang1", "lang1", "lang1", "lang2", "lang2", "other", "mixed", "lang2", "lang2", "other", "other", "other", "other"],
"words": ["@nirvikdada", "la", "hamlai", "bhetna", "paayeko", "will", "be", "your", "greatest", "gift", "ni", "dada", ";P", "#TreatChaiyo", "j", "hos", ";)", "@zappylily", "@AsthaGhm", "@ayacs_asis"]
}
```
#### lid_spaeng
- **Size of downloaded dataset files:** 1.18 MB
- **Size of the generated dataset:** 6.83 MB
- **Total amount of disk used:** 8.01 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
"words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"]
}
```
#### ner_hineng
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.79 MB
- **Total amount of disk used:** 0.92 MB
An example of 'train' looks as follows.
```
{
"idx": 1,
"lid": ["en", "en", "en", "en", "en", "en", "hi", "hi", "hi", "hi", "hi", "hi", "hi", "en", "en", "en", "en", "rest"],
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"],
"words": ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI", "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
}
```
### Data Fields
The data fields are the same among all splits.
#### lid_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_msaea
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_nepeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_spaeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### ner_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
|lid_hineng| 4823| 744|1854|
|lid_msaea | 8464| 1116|1663|
|lid_nepeng| 8451| 1332|3228|
|lid_spaeng|21030| 3332|8289|
|ner_hineng| 1243| 314| 522|
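Each configuration in the table above can be loaded by name. A minimal sketch follows (recent versions of 🤗 Datasets may additionally require `trust_remote_code=True` for script-based datasets such as this one):
```python
from datasets import load_dataset

# Load the Spanish-English language-identification configuration.
lid_spaeng = load_dataset("lince", "lid_spaeng")

example = lid_spaeng["train"][0]
# Tokens and their word-level language-ID tags are parallel lists.
for word, tag in zip(example["words"], example["lid"]):
    print(f"{word}\t{tag}")
```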
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{aguilar-etal-2020-lince,
title = "{L}in{CE}: A Centralized Benchmark for Linguistic Code-switching Evaluation",
author = "Aguilar, Gustavo and
Kar, Sudipta and
Solorio, Thamar",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.223",
pages = "1803--1813",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
Note that each LinCE dataset has its own citation too. Please see [here](https://ritual.uh.edu/lince/datasets)
for the correct citation on each dataset.
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@gaguilar](https://github.com/gaguilar) for adding this dataset. |
snow_simplified_japanese_corpus | 2022-11-03T16:31:17.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:other",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | null | About SNOW T15: The simplified corpus for the Japanese language. The corpus has 50,000 manually simplified and aligned sentences. This corpus contains the original sentences, simplified sentences and English translation of the original sentences. It can be used for automatic text simplification as well as translating simple Japanese into English and vice-versa. The core vocabulary is restricted to 2,000 words where it is selected by accounting for several factors such as meaning preservation, variation, simplicity and the UniDic word segmentation criterion.
For details, refer to the explanation page of Japanese simplification (http://www.jnlp.org/research/Japanese_simplification). The original texts are from "small_parallel_enja: 50k En/Ja Parallel Corpus for Testing SMT Methods", which is a bilingual corpus for machine translation. About SNOW T23: An expansion corpus of 35,000 sentences rewritten in easy Japanese (simple Japanese vocabulary) based on SNOW T15. The original texts are from "Tanaka Corpus" (http://www.edrdg.org/wiki/index.php/Tanaka_Corpus). | @inproceedings{maruyama-yamamoto-2018-simplified,
title = "Simplified Corpus with Core Vocabulary",
author = "Maruyama, Takumi and
Yamamoto, Kazuhide",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://www.aclweb.org/anthology/L18-1185",
}
@inproceedings{yamamoto-2017-simplified-japanese,
title = "やさしい⽇本語対訳コーパスの構築",
author = "⼭本 和英 and
丸⼭ 拓海 and
⾓張 ⻯晴 and
稲岡 夢⼈ and
⼩川 耀⼀朗 and
勝⽥ 哲弘 and
髙橋 寛治",
booktitle = "言語処理学会第23回年次大会",
month = 3月,
year = "2017",
address = "茨城, 日本",
publisher = "言語処理学会",
url = "https://www.anlp.jp/proceedings/annual_meeting/2017/pdf_dir/B5-1.pdf",
}
@inproceedings{katsuta-yamamoto-2018-crowdsourced,
title = "Crowdsourced Corpus of Sentence Simplification with Core Vocabulary",
author = "Katsuta, Akihiro and
Yamamoto, Kazuhide",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://www.aclweb.org/anthology/L18-1072",
} | null | 12 | 312 | ---
annotations_creators:
- crowdsourced
- other
language_creators:
- found
language:
- en
- ja
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: SNOW T15 and T23 (simplified Japanese corpus)
dataset_info:
- config_name: snow_t15
features:
- name: ID
dtype: string
- name: original_ja
dtype: string
- name: simplified_ja
dtype: string
- name: original_en
dtype: string
splits:
- name: train
num_bytes: 7218115
num_examples: 50000
download_size: 3634132
dataset_size: 7218115
- config_name: snow_t23
features:
- name: ID
dtype: string
- name: original_ja
dtype: string
- name: simplified_ja
dtype: string
- name: original_en
dtype: string
- name: proper_noun
dtype: string
splits:
- name: train
num_bytes: 6704695
num_examples: 34300
download_size: 3641507
dataset_size: 6704695
---
# Dataset Card for SNOW T15 and T23 (simplified Japanese corpus)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SNOW T15](http://www.jnlp.org/SNOW/T15), [SNOW T23](http://www.jnlp.org/SNOW/T23)
- **Repository:** [N/A]
- **Paper:** ["Simplified Corpus with Core Vocabulary"](https://www.aclweb.org/anthology/L18-1185), ["やさしい⽇本語対訳コーパスの構築"](https://www.anlp.jp/proceedings/annual_meeting/2017/pdf_dir/B5-1.pdf), ["Crowdsourced Corpus of Sentence Simplification with Core Vocabulary"](https://www.aclweb.org/anthology/L18-1072)
- **Leaderboard:** [N/A]
- **Point of Contact:** Check the homepage.
### Dataset Summary
- **SNOW T15:**
The simplified corpus for the Japanese language. The corpus has 50,000 manually simplified and aligned sentences.
This corpus contains the original sentences, simplified sentences and English translation of the original sentences.
It can be used for automatic text simplification as well as translating simple Japanese into English and vice-versa.
The core vocabulary is restricted to 2,000 words where it is selected by accounting for several factors such as meaning preservation, variation, simplicity and the UniDic word segmentation criterion.
For details, refer to the explanation page of Japanese simplification (http://www.jnlp.org/research/Japanese_simplification).
The original texts are from "small_parallel_enja: 50k En/Ja Parallel Corpus for Testing SMT Methods", which is a bilingual corpus for machine translation.
- **SNOW T23:**
An expansion corpus of 35,000 sentences rewritten in easy Japanese (simple Japanese vocabulary) based on SNOW T15.
The original texts are from "Tanaka Corpus" (http://www.edrdg.org/wiki/index.php/Tanaka_Corpus).
### Supported Tasks and Leaderboards
It can be used for automatic text simplification in Japanese as well as translating simple Japanese into English and vice-versa.
### Languages
Japanese, simplified Japanese, and English.
## Dataset Structure
### Data Instances
SNOW T15 is an xlsx file with columns ID, "#日本語(原文)" (Japanese (original)), "#やさしい日本語" (simplified Japanese), and "#英語(原文)" (English (original)).
SNOW T23 is an xlsx file with columns ID, "#日本語(原文)" (Japanese (original)), "#やさしい日本語" (simplified Japanese), "#英語(原文)" (English (original)), and "#固有名詞" (proper noun).
### Data Fields
- `ID`: sentence ID.
- `original_ja`: original Japanese sentence.
- `simplified_ja`: simplified Japanese sentence.
- `original_en`: original English sentence.
- `proper_noun`: (included only in SNOW T23) proper nouns that the workers extracted. The authors instructed workers not to rewrite proper nouns, leaving the determination of what counts as a proper noun to the workers.
### Data Splits
The data is not split.
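A minimal loading sketch for the two configurations defined in this card's metadata (`snow_t15` and `snow_t23`); recent versions of 🤗 Datasets may additionally require `trust_remote_code=True`:
```python
from datasets import load_dataset

t15 = load_dataset("snow_simplified_japanese_corpus", "snow_t15")
t23 = load_dataset("snow_simplified_japanese_corpus", "snow_t23")

row = t15["train"][0]
print(row["original_ja"])    # original Japanese sentence
print(row["simplified_ja"])  # simplified Japanese sentence
print(row["original_en"])    # English translation of the original
```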
## Dataset Creation
### Curation Rationale
A dataset on the study of automatic conversion to simplified Japanese (Japanese simplification).
### Source Data
#### Initial Data Collection and Normalization
- **SNOW T15:**
The original texts are from "small_parallel_enja: 50k En/Ja Parallel Corpus for Testing SMT Methods", which is a bilingual corpus for machine translation.
- **SNOW T23:**
The original texts are from "Tanaka Corpus" (http://www.edrdg.org/wiki/index.php/Tanaka_Corpus).
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
- **SNOW T15:**
Five students in the laboratory rewrote the original Japanese sentences to simplified Japanese all by hand.
The core vocabulary is restricted to 2,000 words where it is selected by accounting for several factors such as meaning preservation, variation, simplicity and the UniDic word segmentation criterion.
- **SNOW T23:**
Seven people, gathered through crowdsourcing, rewrote all the sentences manually.
Each worker rewrote 5,000 sentences, of which 100 sentences were rewritten to be common among the workers.
The average length of the sentences was kept as similar as possible so that the amount of work did not vary among the workers.
#### Who are the annotators?
Five students for SNOW T15, seven crowd workers for SNOW T23.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The datasets are part of SNOW, Japanese language resources/tools created by Natural Language Processing Laboratory, Nagaoka University of Technology, Japan.
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{maruyama-yamamoto-2018-simplified,
title = "Simplified Corpus with Core Vocabulary",
author = "Maruyama, Takumi and
Yamamoto, Kazuhide",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://www.aclweb.org/anthology/L18-1185",
}
@inproceedings{yamamoto-2017-simplified-japanese,
title = "やさしい⽇本語対訳コーパスの構築",
author = "⼭本 和英 and
丸⼭ 拓海 and
⾓張 ⻯晴 and
稲岡 夢⼈ and
⼩川 耀⼀朗 and
勝⽥ 哲弘 and
髙橋 寛治",
booktitle = "言語処理学会第23回年次大会",
month = 3月,
year = "2017",
address = "茨城, 日本",
publisher = "言語処理学会",
url = "https://www.anlp.jp/proceedings/annual_meeting/2017/pdf_dir/B5-1.pdf",
}
@inproceedings{katsuta-yamamoto-2018-crowdsourced,
title = "Crowdsourced Corpus of Sentence Simplification with Core Vocabulary",
author = "Katsuta, Akihiro and
Yamamoto, Kazuhide",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://www.aclweb.org/anthology/L18-1072",
}
```
### Contributions
Thanks to [@forest1988](https://github.com/forest1988), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
ehartford/dolphin | 2023-09-25T16:59:11.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | ehartford | null | null | null | 177 | 312 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
Dolphin 🐬
https://erichartford.com/dolphin
## Dataset details
This dataset is an attempt to replicate the results of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
Our dataset consists of:
- ~1 million examples of FLANv2 augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl)
- ~3.5 million examples of FLANv2 augmented with GPT-3.5 completions (flan5m-alpaca-uncensored.jsonl)
We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions: we included all 75k of CoT in the FLAN-1m dataset rather than sampling it, and we found that many items were duplicated, so we removed duplicates, resulting in 3.5m instructs in the ChatGPT dataset.
Then we filtered out instances of alignment, refusal, avoidance, and bias, in order to produce an uncensored model upon which can be layered your personalized alignment LoRA.
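The actual filtering pipeline is not published here; the snippet below is only an illustrative sketch of that kind of post-processing, with invented refusal markers and assumed Alpaca-style field names (`instruction`, `input`, `output`).
```python
import json

# Illustrative only -- not the markers or fields used by the authors.
REFUSAL_MARKERS = ["as an ai language model", "i cannot", "i'm sorry, but"]

seen, kept = set(), []
with open("flan5m-alpaca-uncensored.jsonl") as f:
    for line in f:
        row = json.loads(line)
        key = (row["instruction"], row.get("input", ""))
        if key in seen:          # drop exact duplicate instructions
            continue
        seen.add(key)
        if any(m in row["output"].lower() for m in REFUSAL_MARKERS):
            continue             # drop refusal-style completions
        kept.append(row)
print(len(kept), "examples kept")
```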
Token distribution for GPT-3.5 completions

### Loading
```python
from datasets import load_dataset

# load GPT-4 completions
dataset = load_dataset("ehartford/dolphin", data_files="flan1m-alpaca-uncensored.jsonl")

# load GPT-3.5 completions
dataset = load_dataset("ehartford/dolphin", data_files="flan5m-alpaca-uncensored.jsonl")
```
This dataset is licensed apache-2.0 for commercial or non-commercial use.
We currently plan to release Dolphin on:
- Xgen 7b 8k
- LLaMA 13b (Non-commercial)
- MPT 30b 8k
- LLaMA 33b (Non-commercial)
- Falcon 40b
- LLaMA 65b (Non-commercial)
The Dolphin models that are released will be subject to the license of the foundational model on which they are trained. (LLaMA releases will be non-commercial.)
I would like to thank the motley crew of Open Source AI/ML engineers who have worked beside me in this endeavor, including:
- Wing "Caseus" Lian and NanoBit of OpenAccess AI Collective
- Rohan
- Teknium
- Pankaj Mathur
- Tom "TheBloke" Jobbins for quantizing and amplifying
- Special thanks to EdenCoder and chirper.ai for mentorship and financial sponsorship.
- Special thanks to Kilkonie for his very valued mentorship.
- All the other people in the Open Source AI community who have taught me and helped me along the way. |
BeIR/scifact-qrels | 2022-10-23T06:05:06.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 311 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
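For example, the SciFact relevance judgements in this repository can be loaded with 🤗 Datasets; the split names are an assumption, so the dataset object is printed first:
```python
from datasets import load_dataset

qrels = load_dataset("BeIR/scifact-qrels")
print(qrels)             # show the available splits and columns
# Each row links a query id to a relevant corpus document id with a relevance score.
print(qrels["train"][0])
```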
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the 1st row as a header. For example: `q1 doc1 1` (see the reading sketch below)
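A small sketch of reading such a qrels file into the nested-dictionary form used in the example below (the file path is hypothetical):
```python
import csv
from collections import defaultdict

qrels = defaultdict(dict)
with open("qrels/test.tsv", newline="") as f:   # hypothetical path
    reader = csv.DictReader(f, delimiter="\t")  # header row: query-id, corpus-id, score
    for row in reader:
        qrels[row["query-id"]][row["corpus-id"]] = int(row["score"])
```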
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
seamew/ChnSentiCorp | 2021-06-22T08:58:53.000Z | [
"region:us"
] | seamew | null | null | null | 19 | 309 | Entry not found |
biosses | 2022-11-03T16:31:20.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:gpl-3.0",
"region:us"
] | null | BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). | @article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
author={So{\\u{g}}anc{\\i}o{\\u{g}}lu, Gizem and {\\"O}zt{\\"u}rk, Hakime and {\\"O}zg{\\"u}r, Arzucan},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
} | null | 4 | 308 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: biosses
pretty_name: BIOSSES
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float32
splits:
- name: train
num_bytes: 32783
num_examples: 100
download_size: 36324
dataset_size: 32783
---
# Dataset Card for BIOSSES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com)
### Dataset Summary
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19
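As a concrete illustration of this evaluation protocol, the Pearson correlation between gold and model scores can be computed as follows (the score values are placeholders, not taken from the dataset):
```python
from scipy.stats import pearsonr

gold_scores = [2.2, 4.0, 0.0, 3.1]    # mean annotator scores (placeholder values)
model_scores = [2.5, 3.8, 0.4, 2.9]   # similarity estimates from a model (placeholder values)

r, p_value = pearsonr(gold_scores, model_scores)
print(f"Pearson r = {r:.3f}")  # interpret with the guideline above, e.g. 0.60-0.79 is "strong"
```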
### Supported Tasks and Leaderboards
Biomedical Semantic Similarity Scoring.
### Languages
English.
## Dataset Structure
### Data Instances
For each instance, there are two sentences (`sentence1` and `sentence2`) and the corresponding similarity score (the mean of the scores assigned by the five human annotators).
```
{'sentence1': 'Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation',
 'sentence2': 'Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase',
 'score': 2.2}
```
### Data Fields
- `sentence1`: string
- `sentence2`: string
- `score`: float ranging from 0 (no relation) to 4 (equivalent)
### Data Splits
No data splits provided.
## Dataset Creation
### Curation Rationale
### Source Data
The [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The sentence pairs were evaluated by five different human experts who judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. There is a strong association among the scores of the annotators. The lowest correlation is 0.902, which can be considered an upper bound for an algorithmic measure evaluated on this dataset.
| |Correlation r |
|----------:|--------------:|
|Annotator A| 0.952|
|Annotator B| 0.958|
|Annotator C| 0.917|
|Annotator D| 0.902|
|Annotator E| 0.941|
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Gizem Soğancıoğlu, gizemsogancioglu@gmail.com
- Hakime Öztürk, hakime.ozturk@boun.edu.tr
- Arzucan Özgür, gizemsogancioglu@gmail.com
Bogazici University, Istanbul, Turkey
### Licensing Information
BIOSSES is made available under the terms of the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
### Citation Information
@article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
}
### Contributions
Thanks to [@bwang482](https://github.com/bwang482) for adding this dataset. |
squad_es | 2023-04-05T13:40:35.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|squad",
"language:es",
"license:cc-by-4.0",
"arxiv:1912.05200",
"region:us"
] | null | automatic translation of the Stanford Question Answering Dataset (SQuAD) v2 into Spanish | @article{2016arXiv160605250R,
author = {Casimiro Pio , Carrino and Marta R. , Costa-jussa and Jose A. R. , Fonollosa},
title = "{Automatic Spanish Translation of the SQuAD Dataset for Multilingual
Question Answering}",
journal = {arXiv e-prints},
year = 2019,
eid = {arXiv:1912.05200v1},
pages = {arXiv:1912.05200v1},
archivePrefix = {arXiv},
eprint = {1912.05200v2},
} | null | 5 | 308 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad-es
pretty_name: SQuAD-es
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: v1.1.0
splits:
- name: train
num_bytes: 83680438
num_examples: 87595
- name: validation
num_bytes: 10955800
num_examples: 10570
download_size: 39291362
dataset_size: 94636238
---
# Dataset Card for "squad_es"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/ccasimiro88/TranslateAlignRetrieve](https://github.com/ccasimiro88/TranslateAlignRetrieve)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 39.29 MB
- **Size of the generated dataset:** 94.63 MB
- **Total amount of disk used:** 133.92 MB
### Dataset Summary
Automatic translation of the Stanford Question Answering Dataset (SQuAD) v2 into Spanish
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### v1.1.0
- **Size of downloaded dataset files:** 39.29 MB
- **Size of the generated dataset:** 94.63 MB
- **Total amount of disk used:** 133.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [404, 356, 356],
"text": ["Santa Clara, California", "Levi 's Stadium", "Levi 's Stadium en la Bahía de San Francisco en Santa Clara, California."]
},
"context": "\"El Super Bowl 50 fue un partido de fútbol americano para determinar al campeón de la NFL para la temporada 2015. El campeón de ...",
"id": "56be4db0acb8001400a502ee",
"question": "¿Dónde tuvo lugar el Super Bowl 50?",
"title": "Super Bowl _ 50"
}
```
### Data Fields
The data fields are the same among all splits.
#### v1.1.0
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
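As a quick illustration of these fields, the sketch below loads the `v1.1.0` configuration and checks that the first gold answer span appears in the context at the reported character offset. It assumes the dataset id `squad_es` and the configuration and split names shown in the YAML header above.

```python
# Minimal sketch, assuming the dataset id "squad_es" and the "v1.1.0" config shown above.
from datasets import load_dataset

ds = load_dataset("squad_es", "v1.1.0", split="validation")

example = ds[0]
print(example["question"])

# `answers` is a dict of parallel lists: one entry per annotated answer span.
first_text = example["answers"]["text"][0]
first_start = example["answers"]["answer_start"][0]

# The answer text should match the context slice starting at the character offset.
print(first_text)
print(example["context"][first_start:first_start + len(first_text)])
```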
### Data Splits
| name |train|validation|
|------|----:|---------:|
|v1.1.0|87595| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The SQuAD-es dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@article{2016arXiv160605250R,
author = {Casimiro Pio , Carrino and Marta R. , Costa-jussa and Jose A. R. , Fonollosa},
title = "{Automatic Spanish Translation of the SQuAD Dataset for Multilingual
Question Answering}",
journal = {arXiv e-prints},
year = 2019,
eid = {arXiv:1912.05200v1},
pages = {arXiv:1912.05200v1},
archivePrefix = {arXiv},
eprint = {1912.05200v2},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset. |
olm/wikipedia | 2022-11-15T18:39:59.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aa",
"language:ab",
"language:ace",
"language:af",
"language:ak",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arc",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:cho",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gu",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:he",
"language:hi",
"language:hif",
"language:ho",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ii",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kg",
"language:ki",
"language:kj",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lmo",
"language:ln",
"language:lo",
"language:lrc",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mai",
"language:mdf",
"language:mg",
"language:mh",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mus",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:na",
"language:nah",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:ng",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nrf",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sgs",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:tcy",
"language:tdt",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:za",
"language:zea",
"language:zh",
"language:zu",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | olm | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | null | 24 | 308 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- aa
- ab
- ace
- af
- ak
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- atj
- av
- ay
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bi
- bjn
- bm
- bn
- bo
- bpy
- br
- bs
- bug
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- ch
- cho
- chr
- chy
- ckb
- co
- cr
- crh
- cs
- csb
- cu
- cv
- cy
- da
- de
- din
- diq
- dsb
- dty
- dv
- dz
- ee
- el
- eml
- en
- eo
- es
- et
- eu
- ext
- fa
- ff
- fi
- fj
- fo
- fr
- frp
- frr
- fur
- fy
- ga
- gag
- gan
- gd
- gl
- glk
- gn
- gom
- gor
- got
- gu
- gv
- ha
- hak
- haw
- he
- hi
- hif
- ho
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ig
- ii
- ik
- ilo
- inh
- io
- is
- it
- iu
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kbp
- kg
- ki
- kj
- kk
- kl
- km
- kn
- ko
- koi
- krc
- ks
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lbe
- lez
- lfn
- lg
- li
- lij
- lmo
- ln
- lo
- lrc
- lt
- ltg
- lv
- lzh
- mai
- mdf
- mg
- mh
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mus
- mwl
- my
- myv
- mzn
- na
- nah
- nan
- nap
- nds
- ne
- new
- ng
- nl
- nn
- 'no'
- nov
- nrf
- nso
- nv
- ny
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pdc
- pfl
- pi
- pih
- pl
- pms
- pnb
- pnt
- ps
- pt
- qu
- rm
- rmy
- rn
- ro
- ru
- rue
- rup
- rw
- sa
- sah
- sat
- sc
- scn
- sco
- sd
- se
- sg
- sgs
- sh
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- srn
- ss
- st
- stq
- su
- sv
- sw
- szl
- ta
- tcy
- tdt
- te
- tg
- th
- ti
- tk
- tl
- tn
- to
- tpi
- tr
- ts
- tt
- tum
- tw
- ty
- tyv
- udm
- ug
- uk
- ur
- uz
- ve
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xmf
- yi
- yo
- yue
- za
- zea
- zh
- zu
language_bcp47:
- nds-nl
configs:
- 20220301.aa
- 20220301.ab
- 20220301.ace
- 20220301.ady
- 20220301.af
- 20220301.ak
- 20220301.als
- 20220301.am
- 20220301.an
- 20220301.ang
- 20220301.ar
- 20220301.arc
- 20220301.arz
- 20220301.as
- 20220301.ast
- 20220301.atj
- 20220301.av
- 20220301.ay
- 20220301.az
- 20220301.azb
- 20220301.ba
- 20220301.bar
- 20220301.bat-smg
- 20220301.bcl
- 20220301.be
- 20220301.be-x-old
- 20220301.bg
- 20220301.bh
- 20220301.bi
- 20220301.bjn
- 20220301.bm
- 20220301.bn
- 20220301.bo
- 20220301.bpy
- 20220301.br
- 20220301.bs
- 20220301.bug
- 20220301.bxr
- 20220301.ca
- 20220301.cbk-zam
- 20220301.cdo
- 20220301.ce
- 20220301.ceb
- 20220301.ch
- 20220301.cho
- 20220301.chr
- 20220301.chy
- 20220301.ckb
- 20220301.co
- 20220301.cr
- 20220301.crh
- 20220301.cs
- 20220301.csb
- 20220301.cu
- 20220301.cv
- 20220301.cy
- 20220301.da
- 20220301.de
- 20220301.din
- 20220301.diq
- 20220301.dsb
- 20220301.dty
- 20220301.dv
- 20220301.dz
- 20220301.ee
- 20220301.el
- 20220301.eml
- 20220301.en
- 20220301.eo
- 20220301.es
- 20220301.et
- 20220301.eu
- 20220301.ext
- 20220301.fa
- 20220301.ff
- 20220301.fi
- 20220301.fiu-vro
- 20220301.fj
- 20220301.fo
- 20220301.fr
- 20220301.frp
- 20220301.frr
- 20220301.fur
- 20220301.fy
- 20220301.ga
- 20220301.gag
- 20220301.gan
- 20220301.gd
- 20220301.gl
- 20220301.glk
- 20220301.gn
- 20220301.gom
- 20220301.gor
- 20220301.got
- 20220301.gu
- 20220301.gv
- 20220301.ha
- 20220301.hak
- 20220301.haw
- 20220301.he
- 20220301.hi
- 20220301.hif
- 20220301.ho
- 20220301.hr
- 20220301.hsb
- 20220301.ht
- 20220301.hu
- 20220301.hy
- 20220301.ia
- 20220301.id
- 20220301.ie
- 20220301.ig
- 20220301.ii
- 20220301.ik
- 20220301.ilo
- 20220301.inh
- 20220301.io
- 20220301.is
- 20220301.it
- 20220301.iu
- 20220301.ja
- 20220301.jam
- 20220301.jbo
- 20220301.jv
- 20220301.ka
- 20220301.kaa
- 20220301.kab
- 20220301.kbd
- 20220301.kbp
- 20220301.kg
- 20220301.ki
- 20220301.kj
- 20220301.kk
- 20220301.kl
- 20220301.km
- 20220301.kn
- 20220301.ko
- 20220301.koi
- 20220301.krc
- 20220301.ks
- 20220301.ksh
- 20220301.ku
- 20220301.kv
- 20220301.kw
- 20220301.ky
- 20220301.la
- 20220301.lad
- 20220301.lb
- 20220301.lbe
- 20220301.lez
- 20220301.lfn
- 20220301.lg
- 20220301.li
- 20220301.lij
- 20220301.lmo
- 20220301.ln
- 20220301.lo
- 20220301.lrc
- 20220301.lt
- 20220301.ltg
- 20220301.lv
- 20220301.mai
- 20220301.map-bms
- 20220301.mdf
- 20220301.mg
- 20220301.mh
- 20220301.mhr
- 20220301.mi
- 20220301.min
- 20220301.mk
- 20220301.ml
- 20220301.mn
- 20220301.mr
- 20220301.mrj
- 20220301.ms
- 20220301.mt
- 20220301.mus
- 20220301.mwl
- 20220301.my
- 20220301.myv
- 20220301.mzn
- 20220301.na
- 20220301.nah
- 20220301.nap
- 20220301.nds
- 20220301.nds-nl
- 20220301.ne
- 20220301.new
- 20220301.ng
- 20220301.nl
- 20220301.nn
- 20220301.no
- 20220301.nov
- 20220301.nrm
- 20220301.nso
- 20220301.nv
- 20220301.ny
- 20220301.oc
- 20220301.olo
- 20220301.om
- 20220301.or
- 20220301.os
- 20220301.pa
- 20220301.pag
- 20220301.pam
- 20220301.pap
- 20220301.pcd
- 20220301.pdc
- 20220301.pfl
- 20220301.pi
- 20220301.pih
- 20220301.pl
- 20220301.pms
- 20220301.pnb
- 20220301.pnt
- 20220301.ps
- 20220301.pt
- 20220301.qu
- 20220301.rm
- 20220301.rmy
- 20220301.rn
- 20220301.ro
- 20220301.roa-rup
- 20220301.roa-tara
- 20220301.ru
- 20220301.rue
- 20220301.rw
- 20220301.sa
- 20220301.sah
- 20220301.sat
- 20220301.sc
- 20220301.scn
- 20220301.sco
- 20220301.sd
- 20220301.se
- 20220301.sg
- 20220301.sh
- 20220301.si
- 20220301.simple
- 20220301.sk
- 20220301.sl
- 20220301.sm
- 20220301.sn
- 20220301.so
- 20220301.sq
- 20220301.sr
- 20220301.srn
- 20220301.ss
- 20220301.st
- 20220301.stq
- 20220301.su
- 20220301.sv
- 20220301.sw
- 20220301.szl
- 20220301.ta
- 20220301.tcy
- 20220301.te
- 20220301.tet
- 20220301.tg
- 20220301.th
- 20220301.ti
- 20220301.tk
- 20220301.tl
- 20220301.tn
- 20220301.to
- 20220301.tpi
- 20220301.tr
- 20220301.ts
- 20220301.tt
- 20220301.tum
- 20220301.tw
- 20220301.ty
- 20220301.tyv
- 20220301.udm
- 20220301.ug
- 20220301.uk
- 20220301.ur
- 20220301.uz
- 20220301.ve
- 20220301.vec
- 20220301.vep
- 20220301.vi
- 20220301.vls
- 20220301.vo
- 20220301.wa
- 20220301.war
- 20220301.wo
- 20220301.wuu
- 20220301.xal
- 20220301.xh
- 20220301.xmf
- 20220301.yi
- 20220301.yo
- 20220301.za
- 20220301.zea
- 20220301.zh
- 20220301.zh-classical
- 20220301.zh-min-nan
- 20220301.zh-yue
- 20220301.zu
---
# Dataset Card for Wikipedia
This repo is a fork of the original Hugging Face Wikipedia repo [here](https://huggingface.co/datasets/wikipedia).
The difference is that this fork does away with the need for `apache-beam` and is very fast if you have a lot of CPUs on your machine.
It will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.
This fork is also used in the [OLM Project](https://github.com/huggingface/olm-datasets) to pull and process up-to-date wikipedia snapshots.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
The articles are parsed using the ``mwparserfromhell`` tool, and we use ``multiprocess`` for parallelization.
To load this dataset you need to install these first:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
load_dataset("olm/wikipedia", language="en", date="20220920")
```
You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
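Continuing the loading example above, the sketch below shows how these fields are typically consumed when assembling a pretraining corpus. It reuses the `en`-style arguments from the earlier example with the small `simple` Wikipedia for speed; any valid dump date from the backup index works, and building the dataset from the dump can take a while.

```python
# Minimal sketch reusing the language/date arguments from the loading example above.
from datasets import load_dataset

wiki = load_dataset("olm/wikipedia", language="simple", date="20220920", split="train")

print(wiki.column_names)  # expected: ['id', 'url', 'title', 'text']

# Keep only reasonably long articles, a common first cleaning step for pretraining data.
long_articles = wiki.filter(lambda article: len(article["text"]) > 500)
print(f"{len(long_articles)} of {len(wiki)} articles are longer than 500 characters")
print(long_articles[0]["title"], long_articles[0]["url"])
```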
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
|
maritaca-ai/boolq_pt | 2023-02-09T00:38:29.000Z | [
"region:us"
] | maritaca-ai | BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally
occurring ---they are generated in unprompted and unconstrained settings.
Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context.
The text-pair classification setup is similar to existing natural language inference tasks. | @inproceedings{clark2019boolq,
title = {BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
author = {Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle = {NAACL},
year = {2019},
} | null | 1 | 308 | Entry not found |
jamescalam/agent-conversations-retrieval-tool | 2023-08-27T12:57:37.000Z | [
"region:us"
] | jamescalam | null | null | null | 7 | 307 | Entry not found |
reginaboateng/Bioasq7b | 2023-07-13T13:55:58.000Z | [
"language:en",
"region:us"
] | reginaboateng | null | null | null | 1 | 306 | ---
language: en
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 9973215.098861594
num_examples: 6000
- name: validation
num_bytes: 1123648.9011384062
num_examples: 676
download_size: 6069060
dataset_size: 11096864.0
---
# Dataset Card for "Bioasq7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
notrichardren/azaria-mitchell | 2023-08-17T21:22:50.000Z | [
"region:us"
] | notrichardren | null | null | null | 0 | 306 | ---
configs:
- config_name: default
data_files:
- split: combined
path: data/combined-*
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: int64
- name: dataset
dtype: string
- name: qa_type
dtype: int64
- name: ind
dtype: int64
splits:
- name: combined
num_bytes: 1553103
num_examples: 17092
- name: train
num_bytes: 1244045
num_examples: 13673
- name: test
num_bytes: 309058
num_examples: 3419
download_size: 1228770
dataset_size: 3106206
---
# Dataset Card for "azaria-mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lener_br | 2023-09-25T07:35:39.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:unknown",
"legal",
"region:us"
] | null | LeNER-Br is a Portuguese language dataset for named entity recognition
applied to legal documents. LeNER-Br consists entirely of manually annotated
legislation and legal cases texts and contains tags for persons, locations,
time entities, organizations, legislation and legal cases.
To compose the dataset, 66 legal documents from several Brazilian Courts were
collected. Courts of superior and state levels were considered, such as Supremo
Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
Gerais and Tribunal de Contas da União. In addition, four legislation documents
were collected, such as "Lei Maria da Penha", giving a total of 70 documents | @inproceedings{luz_etal_propor2018,
author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
Renato R. R. {de Oliveira} and Matheus Stauffer and
Samuel Couto and Paulo Bermejo},
title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
publisher = {Springer},
series = {Lecture Notes on Computer Science ({LNCS})},
pages = {313--323},
year = {2018},
month = {September 24-26},
address = {Canela, RS, Brazil},
doi = {10.1007/978-3-319-99722-3_32},
url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
} | null | 20 | 305 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: lener-br
pretty_name: leNER-br
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ORGANIZACAO
'2': I-ORGANIZACAO
'3': B-PESSOA
'4': I-PESSOA
'5': B-TEMPO
'6': I-TEMPO
'7': B-LOCAL
'8': I-LOCAL
'9': B-LEGISLACAO
'10': I-LEGISLACAO
'11': B-JURISPRUDENCIA
'12': I-JURISPRUDENCIA
config_name: lener_br
splits:
- name: train
num_bytes: 3984189
num_examples: 7828
- name: validation
num_bytes: 719433
num_examples: 1177
- name: test
num_bytes: 823708
num_examples: 1390
download_size: 2983137
dataset_size: 5527330
tags:
- legal
---
# Dataset Card for leNER-br
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [leNER-BR homepage](https://cic.unb.br/~teodecampos/LeNER-Br/)
- **Repository:** [leNER-BR repository](https://github.com/peluz/lener-br)
- **Paper:** [leNER-BR: Long Form Question Answering](https://cic.unb.br/~teodecampos/LeNER-Br/luz_etal_propor2018.pdf)
- **Point of Contact:** [Pedro H. Luz de Araujo](mailto:pedrohluzaraujo@gmail.com)
### Dataset Summary
LeNER-Br is a Portuguese language dataset for named entity recognition
applied to legal documents. LeNER-Br consists entirely of manually annotated
legislation and legal cases texts and contains tags for persons, locations,
time entities, organizations, legislation and legal cases.
To compose the dataset, 66 legal documents from several Brazilian Courts were
collected. Courts of superior and state levels were considered, such as Supremo
Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
Gerais and Tribunal de Contas da União. In addition, four legislation documents
were collected, such as "Lei Maria da Penha", giving a total of 70 documents
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Portuguese.
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0],
"tokens": [
"EMENTA", ":", "APELAÇÃO", "CÍVEL", "-", "AÇÃO", "DE", "INDENIZAÇÃO", "POR", "DANOS", "MORAIS", "-", "PRELIMINAR", "-", "ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM", "GRAU", "RECURSAL"]
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA", "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word.
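The sketch below shows how the integer tags map back to these label names once the dataset is loaded; it assumes the dataset id `lener_br` and the `ner_tags` class labels declared in the YAML header above.

```python
# Minimal sketch, assuming the dataset id "lener_br" and the ner_tags class labels above.
from datasets import load_dataset

ds = load_dataset("lener_br", split="train")

# ner_tags is a Sequence of ClassLabel, so the label names are stored in the features.
label_names = ds.features["ner_tags"].feature.names  # ['O', 'B-ORGANIZACAO', ...]

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    if tag_id != 0:  # print only tokens inside named entities
        print(f"{token}\t{label_names[tag_id]}")
```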
### Data Splits
The data is split into train, validation and test sets. The split sizes are as follows:
| Train | Val | Test |
| ------ | ----- | ---- |
| 7828 | 1177 | 1390 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{luz_etal_propor2018,
author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
Renato R. R. {de Oliveira} and Matheus Stauffer and
Samuel Couto and Paulo Bermejo},
title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
publisher = {Springer},
series = {Lecture Notes on Computer Science ({LNCS})},
pages = {313--323},
year = {2018},
month = {September 24-26},
address = {Canela, RS, Brazil},
doi = {10.1007/978-3-319-99722-3_32},
url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
}
```
### Contributions
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset. |
kresnik/zeroth_korean | 2023-01-04T06:54:55.000Z | [
"region:us"
] | kresnik | This is the Zeroth-Korean corpus,
licensed under Attribution 4.0 International (CC BY 4.0).
The dataset contains transcribed audio data for Korean: 51.6 hours of transcribed Korean audio for training (22,263 utterances, 105 people, 3,000 sentences) and 1.2 hours of transcribed Korean audio for testing (457 utterances, 10 people). This corpus also contains a pre-trained/designed language model, a lexicon, and a morpheme-based segmenter (Morfessor).
The Zeroth project introduces a free Korean speech corpus and aims to make Korean speech recognition more broadly accessible to everyone.
This project was developed in collaboration between Lucas Jo (@Atlas Guide Inc.) and Wonkyum Lee (@Gridspace Inc.).
Contact: Lucas Jo(lucasjo@goodatlas.com), Wonkyum Lee(wonkyum@gridspace.com) | \ | null | 5 | 305 | Entry not found |
BeIR/dbpedia-entity-qrels | 2022-10-23T06:07:36.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 305 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
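For example, the preprocessed corpus, queries and qrels of a single BEIR task such as DBPedia-Entity can be loaded with the `datasets` library. The sketch below assumes the tasks are mirrored on the Hub under the `BeIR` namespace with `corpus`/`queries` configurations and a separate `*-qrels` repository; repository, configuration and split names may differ per task.

```python
# Minimal sketch, assuming the BEIR tasks are mirrored on the Hub under the "BeIR"
# namespace (e.g. "BeIR/dbpedia-entity" with "corpus"/"queries" configs and
# "BeIR/dbpedia-entity-qrels" for the relevance judgments); adjust names as needed.
from datasets import load_dataset

corpus = load_dataset("BeIR/dbpedia-entity", "corpus", split="corpus")
queries = load_dataset("BeIR/dbpedia-entity", "queries", split="queries")
qrels = load_dataset("BeIR/dbpedia-entity-qrels", split="test")

print(corpus[0])   # {'_id': ..., 'title': ..., 'text': ...}
print(queries[0])  # {'_id': ..., 'text': ...}
print(qrels[0])    # {'query-id': ..., 'corpus-id': ..., 'score': ...}
```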
### Supported Tasks and Leaderboards
BEIR is used to evaluate retrieval models in a zero-shot setup, with nDCG@10 as the primary metric reported across the datasets.
The current best performing models can be found on the [BEIR leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id.
  - `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
medalpaca/medical_meadow_health_advice | 2023-04-06T16:51:22.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"region:us"
] | medalpaca | null | null | null | 2 | 305 | ---
task_categories:
- question-answering
- text-classification
language:
- en
---
# Health Advice
## Dataset Description
- **Paper:** https://experts.syr.edu/en/publications/detecting-causal-language-use-in-science-findings
### Dataset Summary
This is the dataset used in the paper: Detecting Causal Language Use in Science Findings.
It was cleaned and formatted to fit the Alpaca template.
### Citation Information
```
@inproceedings{yu-etal-2019-detecting,
title = "Detecting Causal Language Use in Science Findings",
author = "Yu, Bei and
Li, Yingya and
Wang, Jun",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1473",
doi = "10.18653/v1/D19-1473",
pages = "4664--4674",
}
``` |
togethercomputer/llama-instruct | 2023-08-18T05:04:06.000Z | [
"language:en",
"license:llama2",
"arxiv:2304.12244",
"region:us"
] | togethercomputer | null | null | null | 16 | 305 | ---
license: llama2
language:
- en
---
# llama-instruct
This dataset was used to finetune [Llama-2-7B-32K-Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct).
We follow the distillation paradigm that is used by [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html), [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/), [WizardLM](https://arxiv.org/abs/2304.12244), [Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
— producing instructions by querying a powerful LLM, which in our case is the [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) model released by [Meta](https://ai.meta.com/llama/).
To build [Llama-2-7B-32K-Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct), we collect instructions from 19K human inputs extracted from [ShareGPT-90K](https://huggingface.co/datasets/philschmid/sharegpt-raw) (only using human inputs, not ChatGPT outputs).
The actual script handles multi-turn conversations and also supports restarting and caching via a SQLite3 database.
You can find the full script [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct/blob/main/scripts/distill.py), with merely 122 lines!
The output of this step is a jsonl file, each line corresponding to one conversation:
```
{"text": "[INST] ... instruction ... [/INST] ... answer ... [INST] ... instruction ... [/INST] ..."}
{"text": "[INST] ... instruction ... [/INST] ... answer ... [INST] ... instruction ... [/INST] ..."}
{"text": "[INST] ... instruction ... [/INST] ... answer ... [INST] ... instruction ... [/INST] ..."}
```
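The sketch below shows one way to consume this format, either by loading the dataset from the Hub or by reading a raw `.jsonl` file directly. It assumes the Hub repository exposes a single `train` split with the `{"text": ...}` records shown above; the local file name in the commented variant is only illustrative.

```python
# Minimal sketch, assuming a single "train" split with the {"text": ...} records above.
import json
from datasets import load_dataset

ds = load_dataset("togethercomputer/llama-instruct", split="train")
conversation = ds[0]["text"]

# Each record packs a whole multi-turn conversation into one string using
# Llama-2 style [INST] ... [/INST] markers; split it back into turns if needed.
turns = [t.strip() for t in conversation.split("[INST]") if t.strip()]
print(f"{len(ds)} conversations; the first one has {len(turns)} user turns")

# Equivalent if you have the raw jsonl file locally (file name is illustrative):
# with open("llama-instruct.jsonl") as f:
#     records = [json.loads(line) for line in f]
```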
For more details, please refer to the [Github repo](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
## Languages
The language of the data is entirely English. |
GEM/totto | 2022-10-24T15:30:32.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"data-to-text",
"arxiv:1603.07771",
"arxiv:2007.02871",
"arxiv:2005.10433",
"region:us"
] | GEM | ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. | \@inproceedings{parikh2020totto,
title={{ToTTo}: A Controlled Table-To-Text Generation Dataset},
author={Parikh, Ankur P and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan},
booktitle={Proceedings of EMNLP},
year={2020}
} | null | 1 | 304 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: totto
tags:
- data-to-text
---
# Dataset Card for GEM/totto
## Dataset Description
- **Homepage:** n/a
- **Repository:** [ToTTo Main Repo](https://github.com/google-research-datasets/totto) + [ToTTo Supplementary Repo](https://github.com/google-research/language/tree/master/language/totto)
- **Paper:** https://aclanthology.org/2020.emnlp-main.89
- **Leaderboard:** https://github.com/google-research-datasets/totto
- **Point of Contact:** Ankur Parikh
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/totto).
### Dataset Summary
ToTTo is a high-quality English table-to-text dataset with more than 100,000 examples in which a table from Wikipedia with highlighted cells is paired with a sentence that describes the highlighted cells. All examples in the dataset were post-edited in multiple steps to ensure that the targets are fully faithful to the input information.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/totto')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/totto).
#### website
n/a
#### paper
[ACL Anthology](https://aclanthology.org/2020.emnlp-main.89)
#### authors
Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, Dipanjan Das
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[ToTTo Main Repo](https://github.com/google-research-datasets/totto) + [ToTTo Supplementary Repo](https://github.com/google-research/language/tree/master/language/totto)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2020.emnlp-main.89)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{parikh-etal-2020-totto,
title = "{ToTTo}: A Controlled Table-To-Text Generation Dataset",
author = "Parikh, Ankur and
Wang, Xuezhi and
Gehrmann, Sebastian and
Faruqui, Manaal and
Dhingra, Bhuwan and
Yang, Diyi and
Das, Dipanjan",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.89",
doi = "10.18653/v1/2020.emnlp-main.89",
pages = "1173--1186",
abstract = "We present ToTTo, an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. To obtain generated targets that are natural but also faithful to the source table, we introduce a dataset construction process where annotators directly revise existing candidate sentences from Wikipedia. We present systematic analyses of our dataset and annotation process as well as results achieved by several state-of-the-art baselines. While usually fluent, existing methods often hallucinate phrases that are not supported by the table, suggesting that this dataset can serve as a useful research benchmark for high-precision conditional text generation.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Ankur Parikh
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
totto@google.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Github](https://github.com/google-research-datasets/totto)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
This dataset has an associated, active [leaderboard](https://github.com/google-research-datasets/totto#leaderboard) maintained by the authors.
The test set ground truth targets / references are private, i.e. they are not publicly shared or downloadable - hence, leaderboard submission is necessary for test set evaluation.
To evaluate your model on the dev or test set AND/OR submit to the leaderboard, you need to submit your model files through this [form](https://forms.gle/AcF9TRqWrPhPzztt7) (The form provides an option to opt-out of going on the leaderboard).
The leaderboard reports three sets of BLEU, PARENT and BLEURT scores for each submission - on the overall test set, the *Overlap* subset of the test set and the *non-Overlap* subset of the test set.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
No specific dialects. The original language is from Wikipedia and was post-edited by crowd raters.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The language is post-edited English only (BCP-47: `en`) Wikipedia text. No demographic information about annotators is provided.
Small amounts of what may be called non-English text, such as French accented characters or Cyrillic characters, can sometimes occur, especially in fields whose values are entity names in the input table cells.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-3.0: Creative Commons Attribution Share Alike 3.0 Unported
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
ToTTo is a Table-to-Text NLG task, as the paper title says. The task is as follows: given a Wikipedia table with row names, column names and table cells, with a subset of cells highlighted, generate a natural language description for the highlighted part of the table. The table need not be exactly rectangular, in that cells can sometimes span multiple rows or columns.
An earlier example of a Table-to-Text NLG task is [Wikibio](https://arxiv.org/abs/1603.07771) - here the inputs were Wikipedia infoboxes (from the top right corner of entity-related Wiki pages). In contrast, ToTTo mostly has Wikipedia tables from the main article content itself. In general, Table-To-Text NLG tasks can be seen as a subclass of Data-To-Text NLG tasks - where the task is to generate natural language descriptions of inputs which are in the form of structured or semi-structured data. In general, all Data-To-Text NLG tasks need not have an explicit table or other structure - e.g the input in [WebNLG](https://www.aclweb.org/anthology/W16-6626.pdf) is simply a list of triples.
Importantly, ToTTo differs from earlier examples of Table-To-Text NLG in that:
1. It does not suffer from the problem of divergent references - where ground truth descriptions themselves have additional information not found in the table. ToTTo overcomes this by having a multi-step annotation process to edit the initial, free-form table descriptions (which are from Wikipedia) to make them faithful, unambiguous and independent of article context.
2. Since it provides **control** in the form of highlighted table cells, it prevents the problem of there being a large number of valid descriptions focussing on different parts of the table.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The speaker is required to produce a single, coherent English sentence that describes the highlighted cells in the given table, also using metadata and any other information from the table as applicable.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google Research
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, Dipanjan Das
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google Research
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Varun Gangal created the initial data card and Yacine Jernite wrote the data loader. The data card was updated with new splits by Simon Mille. Sebastian Gehrmann ported the data card and loader from the v1 to the v2 version and extended it with the new fields.
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- The `table` field is a `List[List[Dict]]` in row-major order, with outer lists representing rows and the inner lists columns.
- Each `Dict` has the fields `column_span: int`, `is_header: bool`, `row_span: int`, and `value: str`.
- Table metadata consists of `table_page_title`, `table_section_title` and `table_section_texts`
- The `highlighted_cells` are represented as `List[[row_index,column_index]]`, with each `[row_index,column_index]` indicating that `table[row_index][column_index]` is highlighted (a short access sketch follows this list).
- `example_id` is the unique id per example.
- `sentence_annotations[final_sentence]`, which is the table description/generation target.
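For illustration, here is a minimal, unofficial sketch of how the highlighted cell values can be read off an example with these fields; it indexes straight into the row-major `table` list and does not attempt anything special for multi-row or multi-column spans:
```python
def highlighted_values(example):
    """Return the values of the highlighted cells of a ToTTo example."""
    table = example["table"]  # List[List[Dict]] in row-major order
    return [table[row][col]["value"] for row, col in example["highlighted_cells"]]
```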
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure is aimed to encode highlighted tables in a way that allows rows and columns to span multiple fields in width. The other fields are meta-data about the source and the annotations
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
The initial table-description pairs are tables from Wikipedia articles, extracted through heuristics such as Number Matching (tables and sentences that overlap with a non-date number of at least 3 non-zero digits) (refer to Section 4 of the paper for more details).
1. Table Readability: Tables which are deemed non-readable (due to foreign language, poor formatting etc. - a very small fraction of 0.5%) are removed from the dataset here.
2. Cell Highlighting: The annotator highlights the cells of the table which support the description.
3. Deletion: The annotator removes phrases in the description which are not supported by the highlighted cells.
4. Decontextualization: Descriptions may contain pronouns or other forms of anaphora, or other phenomena which depend on the overall article topic - these are fixed by replacement (e.g. replacing pronouns with the entity, provided it occurs in the table). The replacements allowed are limited to one, and annotators are also instructed to preserve fluency.
5. Secondary Annotation: A second set of annotators is shown the output of Stage 4, and asked to fix it if required to ensure it is grammatical.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
The main repository's `README.md` already provides a thorough walkthrough of data instances and fields [here](https://github.com/google-research-datasets/totto#dataset-description)
Below is the instance for a table from the wiki-page for the musical artist _Weird Al' Yankovic_ , likely listing his on-television appearances.
```
{
"table_page_title": "'Weird Al' Yankovic",
"table_webpage_url": "https://en.wikipedia.org/wiki/%22Weird_Al%22_Yankovic",
"table_section_title": "Television",
"table_section_text": "",
"table": "[Described below]",
"highlighted_cells": [[22, 2], [22, 3], [22, 0], [22, 1], [23, 3], [23, 1], [23, 0]],
"example_id": 12345678912345678912,
"sentence_annotations": [{"original_sentence": "In 2016, Al appeared in 2 episodes of BoJack Horseman as Mr. Peanutbutter's brother, Captain Peanutbutter, and was hired to voice the lead role in the 2016 Disney XD series Milo Murphy's Law.",
"sentence_after_deletion": "In 2016, Al appeared in 2 episodes of BoJack Horseman as Captain Peanutbutter, and was hired to the lead role in the 2016 series Milo Murphy's Law.",
"sentence_after_ambiguity": "In 2016, Al appeared in 2 episodes of BoJack Horseman as Captain Peanutbutter, and was hired for the lead role in the 2016 series Milo Murphy's 'Law.",
"final_sentence": "In 2016, Al appeared in 2 episodes of BoJack Horseman as Captain Peanutbutter and was hired for the lead role in the 2016 series Milo Murphy's Law."}],
}
```
The `table` field is expanded as below:
```
[
[
{
"column_span": 1,
"is_header": true,
"row_span": 1,
"value": "Year"},
{ "column_span": 1,
"is_header": true,
"row_span": 1,
"value": "Title"},
{ "column_span": 1,
"is_header": true,
"row_span": 1,
"value": "Role"},
{ "column_span": 1,
"is_header": true,
"row_span": 1,
"value": "Notes"}
],
[
{ "column_span": 1,
"is_header": false,
"row_span": 1,
"value": "1997"},
{ "column_span": 1,
"is_header": false,
"row_span": 1,
"value": "Eek! The Cat"},
{ "column_span": 1,
"is_header": false,
"row_span": 1,
"value": "Himself"},
{ "column_span": 1,
"is_header": false,
"row_span": 1,
"value": "Episode: 'The FugEektive'"}
], ...
]
```
The [Supplementary Repo](https://github.com/google-research/language/tree/master/language/totto) also provides browsable samples under its `sample/` folder. It additionally provides HTML visualization scripts with their outputs located under the aforementioned folder. The instructions to access and visualize these samples can also be found [here](https://github.com/google-research/language/tree/master/language/totto#visualizing-sample-data).
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The dataset consists of 120,000 training examples, plus equi-sized dev and test sets of 7,700 examples each.
Refer to Table 5 in the paper for a more extensive list of properties about table size, target vocabulary etc and their aggregates.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The dev and test splits are further equally distributed between _Overlap_ and _non-Overlap_ subsets.
The examples in the _non-Overlap_ subset are harder on account of the domain shift resulting from them having none of their header (row and column) names in common with those seen during training.
Refer to Table 5 in the paper for a more extensive list of properties about table size, target vocabulary etc and their aggregates.
####
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
There are some very large tables in the dataset with thousands of rows. Table 7 in the paper illustrates some of the challenges of the dataset and shows that very few examples require access to the table description itself, which makes those examples outliers.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
ToTTo is one of the two datasets representing Table-to-Text NLG in GEM, the other one being [DART](https://arxiv.org/pdf/2007.02871.pdf). Unlike DART, which combines datasets from multiple sources and furnishes them in a unified setting, ToTTo is from a homogeneous source. As explained in the Task Summary above, it also has an annotation process explicitly crafted to reduce divergent descriptions, which is not true of DART.
Furthermore, ToTTo is also an instance of a **controlled** generation task - where in addition to the input (in this case the table) an additional **control** (in this case the highlighted cells) is given as an additional goal for the generation. The DART task formulation does not include controls.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
The input is much more complex and the quality much better than that of comparable datasets. The highlighted table cells provide a unique challenge to models.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Reasoning, surface realization
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
9 challenge sets for ToTTo were added to the GEM evaluation suite, 8 created specifically for the task and 1 coming from the original data.
1. We created subsets of the training and development sets of 500 randomly selected inputs each.
2. We applied input scrambling on a subset of 500 randomly selected test instances; the order of the highlighted cells was randomly reassigned (a toy sketch of this transformation follows this list).
3. For the input size, we created subpopulations based on the number of input highlighted cells in the whole table.
| Input length | Frequency English |
|---------------|-------------------|
| 1 | 898 |
| 2 | 1850 |
| 3 | 2221 |
| 4 | 1369 |
| 5 | 483 |
| 6 | 379 |
| 7 | 124 |
| 8 | 128 |
| 9 | 61 |
| 10 | 40 |
| 11 | 20 |
| 12 | 26 |
| 13 | 10 |
| 14 | 14 |
| 15 | 14 |
| 16 | 7 |
| 17 | 6 |
| 18 | 5 |
| 19 | 5 |
| 20 | 5 |
| 21 | 4 |
| 22 | 1 |
| 23 | 2 |
| 24 | 4 |
| 25 | 1 |
| 26...496 | 1 |
4. We also divided the test set according to the size of the whole table, based on the idea that larger tables represent a bigger space to take into account when describing the highlighted cells; generating accurate text for a larger table could be more challenging than for a smaller one. There are 693 different table sizes, ranging from 2 to 15834 cells.
| Table size |Frequency English|
|-----------------|-----------------|
| 2 | 71 |
| 3 | 52 |
| 4 | 36 |
| 5 | 41 |
| 6 | 144 |
| 7 | 47 |
| 8 | 59 |
| 9 | 105 |
| 10 | 162 |
| 11 | 36 |
| 12 | 158 |
| 13 | 35 |
| 14 | 79 |
| 15 | 136 |
| 16 | 111 |
| 17 | 48 |
| 18 | 123 |
| 19 | 29 |
| 20 | 112 |
| 21 | 91 |
| 22 | 17 |
| 23 | 7 |
| 24 | 169 |
| 25 | 56 |
| 26 | 12 |
| 27 | 40 |
| 28 | 77 |
| 29 | 7 |
| 30 | 122 |
| 31 | 4 |
| 32 | 49 |
| 33 | 21 |
| 34 | 7 |
| 35 | 103 |
| 36 | 131 |
| 37 | 10 |
| 38 | 6 |
| 39 | 26 |
| 40 | 110 |
| 41 | 1 |
| 42 | 54 |
| 43 | 6 |
| 44 | 47 |
| 45 | 79 |
| 46 | 4 |
| 47 | 2 |
| 48 | 114 |
| 49 | 18 |
| 50 | 55 |
| 51 | 11 |
| 52 | 43 |
| 54 | 80 |
| 55 | 73 |
| 56 | 64 |
| 57 | 12 |
| 58 | 1 |
| 60 | 114 |
| 61 | 4 |
| 63 | 39 |
| 64 | 36 |
| 65 | 62 |
| 66 | 48 |
| 67 | 1 |
| 68 | 36 |
| 69 | 6 |
| 70 | 81 |
| 72 | 76 |
| 73 | 1 |
| 74 | 1 |
| 75 | 44 |
| 76 | 33 |
| 77 | 30 |
| 78 | 66 |
| 79 | 1 |
| 80 | 83 |
| 81 | 12 |
| 82 | 1 |
| 84 | 80 |
| 85 | 25 |
| 86 | 1 |
| 87 | 3 |
| 88 | 35 |
| 90 | 78 |
| 91 | 18 |
| 92 | 22 |
| 93 | 5 |
| 94 | 2 |
| 95 | 31 |
| 96 | 50 |
| 98 | 11 |
| 99 | 14 |
| 100 | 48 |
| 102 | 24 |
| 104 | 29 |
| 105 | 36 |
| 106 | 2 |
| 108 | 51 |
| 110 | 31 |
| ...8000+ | (up to 10) |
5. We also created three splits based on the subset of test examples in pages about people.
We then used the structured information in WikiData to identify the following information:
- gender (male, and female),
- nationality grouped by continent (Africa, Asia, Europe, North America, Oceania, and South America)
- ethnicity (African American and all USA)
The categories within gender, ethnicity, and nationality were chosen based on data availability; the ToTTo dataset includes mostly tables that do not focus on people. As a result, only seven people in the original test set are marked as having a non-binary gender. Similar sparsity informed the grouping of nationalities by continent – only 19 countries are represented by more than 10 people in the test set. In case a person has citizenships across multiple continents, we may include the person in any of the included continents.
Finally, ethnicity is very sparsely annotated in WikiData; only 150 test examples in ToTTo have this information and 128 of these are African Americans. We thus are unable to compare the performance on, e.g., Yoruba or Punjabi people, both of which have fewer than five instances. Another caveat here is that only 21 of the 128 people are female. We thus compare the African American population to results on a subset that includes all US citizens.
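As a toy illustration of the input scrambling transformation from item 2 above (a sketch only, under the assumption that scrambling simply permutes the order of the highlighted cells; the exact GEM implementation may differ):
```python
import random

def scramble_highlighted_cells(example, seed=0):
    """Return a copy of the example with the order of its highlighted cells shuffled."""
    scrambled = dict(example)
    cells = list(example["highlighted_cells"])
    random.Random(seed).shuffle(cells)
    scrambled["highlighted_cells"] = cells
    return scrambled
```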
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
generalization, fairness, robustness
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
- The highest spot on the leaderboard is currently held by an anonymous method, with BLEU=49.2, PARENT=58.7 and BLEURT=0.249 on the _Overall_ test set.
- The **highest scoring non-anonymous** method is the T5-based method of [Kale, 2020](https://arxiv.org/abs/2005.10433). This method uses a simple row-major linearization scheme to convert the table to a flat string (it keeps only the highlighted cells and ignores the other cells; table titles and section titles are prefixed at the start of the linearized string). A toy version of such a linearization is sketched after this list. The linearized input - output description pairs from training examples are then used to finetune T5, with BLEU being used as the dev metric to pick checkpoints, and beam search with beam size 10 being the decoding method.
Though the best numbers from this method are naturally from the largest T5-pretrained architecture (T5-3B), the paper shows improvements over the next-highest BERT-to-BERT method even when using T5-Base or T5-Small, which have the same number of and fewer parameters than BERT-to-BERT, respectively.
- The [Supplementary Repo](https://github.com/google-research/language/tree/master/language/totto) provides several useful modules to get started with for new approach implementation:
1. Code for the particular preprocessing / linearization scheme used to linearize the tables into flat sequences for the baseline approaches described in the paper has been described and shared [herein](https://github.com/google-research/language/tree/master/language/totto#baseline-preprocessing)
2. An [evaluation script](https://github.com/google-research/language/tree/master/language/totto#running-the-evaluation-scripts-locally) for locally scoring BLEU and PARENT system outputs on dev (or train) sets. Since BLEURT is a model-based metric, a [slightly separate](https://github.com/google-research/language/tree/master/language/totto#computing-the-bleurt-score) set of instructions is provided to evaluate on the same.
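As a rough, unofficial sketch of the kind of linearization described above (the exact tags, ordering and handling of metadata in Kale (2020) and the baseline preprocessing code may differ), an example could be flattened like this:
```python
def linearize(example):
    """Flatten page/section titles plus the highlighted cell values into a single string."""
    parts = [
        "<page_title> " + example["table_page_title"],
        "<section_title> " + example["table_section_title"],
    ]
    table = example["table"]
    for row, col in sorted(example["highlighted_cells"]):
        parts.append("<cell> " + table[row][col]["value"])
    return " ".join(parts)
```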
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Reasoning, surface realization
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `BLEURT`, `Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
PARENT: a metric that measures the F1 score of overlap between input content words, those used in the references, and those in the generated text, while ignoring the general surface form. It can thus measure faithfulness much better than metrics that only measure overlap with a reference.
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The metrics are used as in the leaderboard. The original paper additionally conducted a human evaluation focusing on fluency, faithfulness, and coverage.
Faithfulness was measured as whether the facts in the text are supported by the input, and coverage as the number of highlighted cells that were considered. They thus represent precision and recall of the content.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
See leaderboard.
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
Tables occurring in Wikipedia articles were chosen as the data source with the following reasons in mind:
1. Wide coverage in terms of both vocabulary and concepts.
2. Wikipedia tables are not confined to a regular structure, with multi-row or multi-column cells occurring with a sufficient frequency.
3. Likely to contain reasonable-quality, natural text descriptions in the proximity of the table, which are also extractable by heuristics. (see the start of Section 4 for the heuristics used)
To prevent an overlap with the earlier [Wikibio](https://arxiv.org/abs/1603.07771) dataset which focussed on Infobox-first sentence pairs from Wikipedia biography articles, the authors avoid using Infoboxes as a data source.
The overall curation process of initially collecting free text and then having annotators revise it was designed to combine the advantages of free-form text descriptions (which are fluent, high-quality and unhurriedly written, but also divergent and unfaithful) with annotator descriptions (which can be tailored to be faithful and to conform exactly to desired task requirements).
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The speaker is required to produce a single, coherent English sentence that describes the highlighted cells in the given table, also using metadata and any other information from the table as applicable.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
wikipedia.org
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Other crowdworker platform`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The basic source language producers are Wikipedia authors and/or editors, since the annotation starts with the natural text description near the Wikipedia table.
The auxiliary source language producers are the annotators (two per example) who iteratively revise these descriptions to make them unambiguous and faithful to a subset of highlighted cells in the table.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
The initial table-description pairs are tables from Wikipedia articles, extracted through heuristics such as Number Matching (tables and sentences that overlap with a non-date number of at least 3 non-zero digits) (refer to Section 4 of the paper for more details).
1. Table Readability: Tables which are deemed non-readable (due to foreign language, poor formatting etc. - a very small fraction of 0.5%) are removed from the dataset here.
2. Cell Highlighting: The annotator highlights the cells of the table which support the description.
3. Deletion: The annotator removes phrases in the description which are not supported by the highlighted cells.
4. Decontextualization: Descriptions may contain pronouns or other forms of anaphora, or other phenomena which depend on the overall article topic - these are fixed by replacement (e.g. replacing pronouns with the entity, provided it occurs in the table). The replacements allowed are limited to one, and annotators are also instructed to preserve fluency.
5. Secondary Annotation: A second set of annotators is shown the output of Stage 4, and asked to fix it if required to ensure it is grammatical.
The paper does not specifically describe the annotation platform or location profiles of the annotators.
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
After construction of the splits, the data curators filtered training examples that had rare table header combinations (<=5 examples) and which had an overlap with the validation or test splits.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
Annotators were full time employees that were aware of the goal of the project and consented to having the data released as part of the dataset.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Since the source data is from wikipedia, only data in the public domain is included in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
yes
#### Maintenance Plan Details
<!-- info: Describe the original dataset's maintenance plan. -->
<!-- scope: microscope -->
For submissions, you can delete your data by emailing totto@google.com from the email account used to sign up for the submission. Deletion requests will be responded to within 60 days.
#### Maintainer Contact Information
<!-- info: Provide contact information of a person responsible for the dataset maintenance -->
<!-- scope: periscope -->
Ankur Parikh (aparikh@google.com)
#### Any Contestation Mechanism?
<!-- info: Does the maintenance plan include a contestation mechanism allowing individuals to request removal fo content? -->
<!-- scope: periscope -->
form submission
#### Contestation Form Link
<!-- info: Provide the form link or contact information -->
<!-- scope: periscope -->
totto@google.com
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
#### Links and Summaries of Analysis Work
<!-- info: Provide links to and summaries of works analyzing these biases. -->
<!-- scope: microscope -->
The original work as well as our GEM paper analyze some of these biases.
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
This dataset is created using tables, and the table cell contents may hence naturally exhibit biases which have been found to exist in Wikipedia, such as some forms of gender bias (e.g. [(Graells-Garrido et al., 2015)](https://labtomarket.files.wordpress.com/2018/01/wiki_gender_bias.pdf) notes that spouse information is more likely to be discussed for females than males).
The table descriptions (targets/references) are, as discussed earlier, collected through a two-step process.
1. The natural text description near the table is taken as a starting point. This is Wikipedia article text as created upto that point in time by a chain of collaborative edits from Wikipedia authors.
2. The initial description is revised by chain of two or more annotated revisions, to make it unambiguous and faithful to a set of highlighted table cells.
From their origin in 1), the descriptions may exhibit biases seen in Wikipedia text as mentioned above. From their revisions in 2), the descriptions may show biases originating from annotator-authored text, such as a preference for shorter descriptions since they're faster to write, or linguistic preferences influenced by the locations dominant in the annotator distribution. (However, note that these are likely to be much reduced since the annotators here are merely revising rather than completely authoring. Moreover, each sentence goes through at least two annotators, which acts as a check against the personal biases of a single annotator.)
Naturally-occurring text is also known to suffer from other biases such as reporting bias [(Gordon and Van Durme, 2013)](https://openreview.net/forum?id=AzxEzvpdE3Wcy) - this also applies to this dataset via its origin from Wikipedia.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. -->
<!-- scope: microscope -->
Since the source data is from wikipedia, only data in the public domain is included in the dataset.
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The dataset is limited to topics that are present in Wikipedia, more specifically those topics that are present in articles which contain at least one table.
_Sports_ and _Countries_ form 53.4% of the dataset. The remaining fraction is made up of broader topics like _Europe_, _North America_ and _Politics_.
|
Divyanshu/indicxnli | 2022-10-06T15:26:00.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc0-1.0",
"arxiv:2204.08776",
"region:us"
] | Divyanshu | IndicXNLI is a translated version of XNLI to 11 Indic Languages. As with XNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
labels). | @misc{https://doi.org/10.48550/arxiv.2204.08776,
doi = {10.48550/ARXIV.2204.08776},
url = {https://arxiv.org/abs/2204.08776},
author = {Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {IndicXNLI: Evaluating Multilingual Inference for Indian Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
} |
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: IndicXNLI
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for "IndicXNLI"
## Table of Contents
- [Dataset Card for "IndicXNLI"](#dataset-card-for-indicxnli)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** <https://github.com/divyanshuaggarwal/IndicXNLI>
- **Paper:** [IndicXNLI: Evaluating Multilingual Inference for Indian Languages](https://arxiv.org/abs/2204.08776)
- **Point of Contact:** [Divyanshu Aggarwal](mailto:divyanshuggrwl@gmail.com)
### Dataset Summary
INDICXNLI is similar to the existing XNLI dataset in shape/form, but focusses on the Indic language family. INDICXNLI includes NLI data for eleven major Indic languages: Assamese (‘as’), Gujarati (‘gu’), Kannada (‘kn’), Malayalam (‘ml’), Marathi (‘mr’), Odia (‘or’), Punjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi (‘hi’), and Bengali (‘bn’).
### Supported Tasks and Leaderboards
**Tasks:** Natural Language Inference
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One example from the `hi` subset is given below (as a Python dict).
```python
{'premise': 'अवधारणात्मक रूप से क्रीम स्किमिंग के दो बुनियादी आयाम हैं-उत्पाद और भूगोल।',
 'hypothesis': 'उत्पाद और भूगोल क्रीम स्किमिंग का काम करते हैं।',
 'label': 1}  # 1 = neutral
```
### Data Fields
- `premise (string)`: Premise Sentence
- `hypothesis (string)`: Hypothesis Sentence
- `label (integer)`: Integer label: `0` for `entailment`, `2` for `contradiction`, and `1` for `neutral` (see the small mapping sketch below).
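A small helper (not shipped with the dataset) that maps the integer label of an example to its name, following the convention described above:
```python
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

def label_name(example):
    """Return the string name of an example's integer label."""
    return LABEL_NAMES[example["label"]]
```
For the `hi` example shown above, `label_name(example)` returns `"neutral"`.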
### Data Splits
<!-- Below is the dataset split given for `hi` dataset.
```python
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 392702
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 5010
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 2490
})
})
``` -->
Language | ISO 639-1 Code |Train | Test | Dev |
--------------|----------------|-------|-----|------|
Assamese | as | 392,702 | 5,010 | 2,490 |
Bengali | bn | 392,702 | 5,010 | 2,490 |
Gujarati | gu | 392,702 | 5,010 | 2,490 |
Hindi | hi | 392,702 | 5,010 | 2,490 |
Kannada | kn | 392,702 | 5,010 | 2,490 |
Malayalam | ml |392,702 | 5,010 | 2,490 |
Marathi | mr |392,702 | 5,010 | 2,490 |
Oriya | or | 392,702 | 5,010 | 2,490 |
Punjabi | pa | 392,702 | 5,010 | 2,490 |
Tamil | ta | 392,702 | 5,010 | 2,490 |
Telugu | te | 392,702 | 5,010 | 2,490 |
<!-- The dataset split remains same across all languages. -->
## Dataset usage
Code snippet for loading the dataset with the `datasets` library.
```python
from datasets import load_dataset
dataset = load_dataset("Divyanshu/indicxnli")
```
## Dataset Creation
Machine translation of the English XNLI dataset into the 11 Indic languages listed above.
### Curation Rationale
[More information needed]
### Source Data
[XNLI dataset](https://cims.nyu.edu/~sbowman/xnli/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
#### Human Verification Process
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
## Considerations for Using the Data
### Social Impact of Dataset
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
### Discussion of Biases
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
### Other Known Limitations
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
### Dataset Curators
Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@misc{https://doi.org/10.48550/arxiv.2204.08776,
doi = {10.48550/ARXIV.2204.08776},
url = {https://arxiv.org/abs/2204.08776},
author = {Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {IndicXNLI: Evaluating Multilingual Inference for Indian Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!-- ### Contributions -->
|
AlekseyKorshuk/hellaswag | 2022-06-06T10:33:23.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 2 | 304 | Entry not found |
result-kand2-sdxl-wuerst-karlo/694df328 | 2023-09-28T17:05:56.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 304 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 162
num_examples: 10
download_size: 1318
dataset_size: 162
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "694df328"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dynabench/dynasent | 2021-04-29T11:30:24.000Z | [
"arxiv:2012.15349",
"arxiv:1803.09010",
"arxiv:1810.03993",
"region:us"
] | dynabench | Dynabench.DynaSent is a Sentiment Analysis dataset collected using a
human-and-model-in-the-loop. | null | null | 3 | 303 | # DynaSent: Dynamic Sentiment Analysis Dataset
DynaSent is an English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis. This dataset card is forked from the original [DynaSent Repository](https://github.com/cgpotts/dynasent).
## Contents
* [Citation](#Citation)
* [Dataset files](#dataset-files)
* [Quick start](#quick-start)
* [Data format](#data-format)
* [Models](#models)
* [Other files](#other-files)
* [License](#license)
## Citation
[Christopher Potts](http://web.stanford.edu/~cgpotts/), [Zhengxuan Wu](http://zen-wu.social), Atticus Geiger, and [Douwe Kiela](https://douwekiela.github.io). 2020. [DynaSent: A dynamic benchmark for sentiment analysis](https://arxiv.org/abs/2012.15349). Ms., Stanford University and Facebook AI Research.
```stex
@article{potts-etal-2020-dynasent,
title={{DynaSent}: A Dynamic Benchmark for Sentiment Analysis},
author={Potts, Christopher and Wu, Zhengxuan and Geiger, Atticus and Kiela, Douwe},
journal={arXiv preprint arXiv:2012.15349},
url={https://arxiv.org/abs/2012.15349},
year={2020}}
```
## Dataset files
The dataset is [dynasent-v1.1.zip](dynasent-v1.1.zip), which is included in this repository. `v1.1` differs from `v1` only in that `v1.1` has proper unique ids for Round 1 and corrects a bug that led to some non-unique ids in Round 2. There are no changes to the examples or other metadata.
The dataset consists of two rounds, each with a train/dev/test split:
### Round 1: Naturally occurring sentences
* `dynasent-v1.1-round01-yelp-train.jsonl`
* `dynasent-v1.1-round01-yelp-dev.jsonl`
* `dynasent-v1.1-round01-yelp-test.jsonl`
### Round 1: Sentences crowdsourced using Dynabench
* `dynasent-v1.1-round02-dynabench-train.jsonl`
* `dynasent-v1.1-round02-dynabench-dev.jsonl`
* `dynasent-v1.1-round02-dynabench-test.jsonl`
### SST-dev revalidation
The dataset also contains a version of the [Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/) dev set in our format with labels from our validation task:
* `sst-dev-validated.jsonl`
## Quick start
This function can be used to load any subset of the files:
```python
import json
def load_dataset(*src_filenames, labels=None):
data = []
for filename in src_filenames:
with open(filename) as f:
for line in f:
d = json.loads(line)
if labels is None or d['gold_label'] in labels:
data.append(d)
return data
```
For example, to create a Round 1 train set restricting to examples with ternary gold labels:
```python
import os
r1_train_filename = os.path.join('dynasent-v1.1', 'dynasent-v1.1-round01-yelp-train.jsonl')
ternary_labels = ('positive', 'negative', 'neutral')
r1_train = load_dataset(r1_train_filename, labels=ternary_labels)
X_train, y_train = zip(*[(d['sentence'], d['gold_label']) for d in r1_train])
```
## Data format
### Round 1 format
```python
{'hit_ids': ['y5238'],
'sentence': 'Roto-Rooter is always good when you need someone right away.',
'indices_into_review_text': [0, 60],
'model_0_label': 'positive',
'model_0_probs': {'negative': 0.01173639390617609,
'positive': 0.7473671436309814,
'neutral': 0.24089649319648743},
'text_id': 'r1-0000001',
'review_id': 'IDHkeGo-nxhqX4Exkdr08A',
'review_rating': 1,
'label_distribution': {'positive': ['w130', 'w186', 'w207', 'w264', 'w54'],
'negative': [],
'neutral': [],
'mixed': []},
'gold_label': 'positive'}
```
Details:
* `'hit_ids'`: List of Amazon Mechanical Turk Human Interface Tasks (HITs) in which this example appeared during validation. The values are anonymized but used consistently throughout the dataset.
* `'sentence'`: The example text.
* `'indices_into_review_text':` indices of `'sentence'` into the original review in the [Yelp Academic Dataset](https://www.yelp.com/dataset).
* `'model_0_label'`: prediction of Model 0 as described in the paper. The possible values are `'positive'`, `'negative'`, and `'neutral'`.
* `'model_0_probs'`: probability distribution predicted by Model 0. The keys are `('positive', 'negative', 'neutral')` and the values are floats.
* `'text_id'`: unique identifier for this entry.
* `'review_id'`: review-level identifier for the review from the [Yelp Academic Dataset](https://www.yelp.com/dataset) containing `'sentence'`.
* `'review_rating'`: review-level star-rating for the review containing `'sentence'` in the [Yelp Academic Dataset](https://www.yelp.com/dataset). The possible values are `1`, `2`, `3`, `4`, and `5`.
* `'label_distribution':` response distribution from the MTurk validation task. The keys are `('positive', 'negative', 'neutral')` and the values are lists of anonymized MTurk ids, which are used consistently throughout the dataset.
* `'gold_label'`: the label chosen by at least three of the five workers if there is one (possible values: `'positive'`, `'negative'`, `'neutral'`, and `'mixed'`), else `None`. A small recomputation sketch follows this list.
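The `'gold_label'` field can also be recomputed from `'label_distribution'`; here is a small sketch that mirrors the description above:
```python
def majority_gold_label(label_distribution, threshold=3):
    """Return the label chosen by at least `threshold` validation workers, else None."""
    for label, worker_ids in label_distribution.items():
        if len(worker_ids) >= threshold:
            return label
    return None
```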
Here is some code one could use to augment a dataset, as loaded by `load_dataset`, with a field giving the full review text from the [Yelp Academic Dataset](https://www.yelp.com/dataset):
```python
import json
def index_yelp_reviews(yelp_src_filename='yelp_academic_dataset_review.json'):
index = {}
with open(yelp_src_filename) as f:
for line in f:
d = json.loads(line)
index[d['review_id']] = d['text']
return index
yelp_index = index_yelp_reviews()
def add_review_text_round1(dataset, yelp_index):
for d in dataset:
        # The Yelp index is keyed by review ids, so look up the example's review_id.
        review_text = yelp_index[d['review_id']]
# Check that we can find the sentence as expected:
start, end = d['indices_into_review_text']
assert review_text[start: end] == d['sentence']
d['review_text'] = review_text
return dataset
```
### Round 2 format
```python
{'hit_ids': ['y22661'],
'sentence': "We enjoyed our first and last meal in Toronto at Bombay Palace, and I can't think of a better way to book our journey.",
'sentence_author': 'w250',
'has_prompt': True,
'prompt_data': {'indices_into_review_text': [2093, 2213],
'review_rating': 5,
'prompt_sentence': "Our first and last meals in Toronto were enjoyed at Bombay Palace and I can't think of a better way to bookend our trip.",
'review_id': 'Krm4kSIb06BDHternF4_pA'},
'model_1_label': 'positive',
'model_1_probs': {'negative': 0.29140257835388184,
'positive': 0.6788994669914246,
'neutral': 0.029697999358177185},
'text_id': 'r2-0000001',
'label_distribution': {'positive': ['w43', 'w26', 'w155', 'w23'],
'negative': [],
'neutral': [],
'mixed': ['w174']},
'gold_label': 'positive'}
```
Details:
* `'hit_ids'`: List of Amazon Mechanical Turk Human Interface Tasks (HITs) in which this example appeared during validation. The values are anonymized but used consistently throughout the dataset.
* `'sentence'`: The example text.
* `'sentence_author'`: Anonymized MTurk id of the worker who wrote `'sentence'`. These are from the same family of ids as used in `'label_distribution'`, but this id is never one of the ids in `'label_distribution'` for this example.
* `'has_prompt'`: `True` if the `'sentence'` was written with a Prompt else `False`.
* `'prompt_data'`: None if `'has_prompt'` is False, else:
* `'indices_into_review_text'`: indices of `'prompt_sentence'` into the original review in the [Yelp Academic Dataset](https://www.yelp.com/dataset).
* `'review_rating'`: review-level star-rating for the review containing `'sentence'` in the [Yelp Academic Dataset](https://www.yelp.com/dataset).
* `'prompt_sentence'`: The prompt text.
* `'review_id'`: review-level identifier for the review from the [Yelp Academic Dataset](https://www.yelp.com/dataset) containing `'prompt_sentence'`.
* `'model_1_label'`: prediction of Model 1 as described in the paper. The possible values are `'positive'`, `'negative'`, and `'neutral'`.
* `'model_1_probs'`: probability distribution predicted by Model 1. The keys are `('positive', 'negative', 'neutral')` and the values are floats.
* `'text_id'`: unique identifier for this entry.
* `'label_distribution'`: response distribution from the MTurk validation task. The keys are `('positive', 'negative', 'neutral')` and the values are lists of anonymized MTurk ids, which are used consistently throughout the dataset.
* `'gold_label'`: the label chosen by at least three of the five workers if there is one (possible values: `'positive'`, `'negative'`, `'neutral'`, and `'mixed'`), else `None`.
To add the review texts to the `'prompt_data'` field, one can extend the code above for Round 1 with the following function:
```python
def add_review_text_round2(dataset, yelp_index):
for d in dataset:
if d['has_prompt']:
prompt_data = d['prompt_data']
review_text = yelp_index[prompt_data['review_id']]
# Check that we can find the sentence as expected:
start, end = prompt_data['indices_into_review_text']
assert review_text[start: end] == prompt_data['prompt_sentence']
prompt_data['review_text'] = review_text
return dataset
```
### SST-dev format
```python
{'hit_ids': ['s20533'],
'sentence': '-LRB- A -RRB- n utterly charming and hilarious film that reminded me of the best of the Disney comedies from the 60s.',
'tree': '(4 (2 (1 -LRB-) (2 (2 A) (3 -RRB-))) (4 (4 (2 n) (4 (3 (2 utterly) (4 (3 (4 charming) (2 and)) (4 hilarious))) (3 (2 film) (3 (2 that) (4 (4 (2 (2 reminded) (3 me)) (4 (2 of) (4 (4 (2 the) (4 best)) (2 (2 of) (3 (2 the) (3 (3 Disney) (2 comedies))))))) (2 (2 from) (2 (2 the) (2 60s)))))))) (2 .)))',
'text_id': 'sst-dev-validate-0000437',
'sst_label': '4',
'label_distribution': {'positive': ['w207', 'w3', 'w840', 'w135', 'w26'],
'negative': [],
'neutral': [],
'mixed': []},
'gold_label': 'positive'}
```
Details:
* `'hit_ids'`: List of Amazon Mechanical Turk Human Interface Tasks (HITs) in which this example appeared during validation. The values are anonymized but used consistently throughout the dataset.
* `'sentence'`: The example text.
* `'tree'`: The parsetree for the example as given in the SST distribution.
* `'text_id'`: A new identifier for this example.
* `'sst_label'`: The root-node label from the SST. Possible values are `'0'`, `'1'`, `'2'`, `'3'`, and `'4'` (a small coarsening sketch follows this list).
* `'label_distribution':` response distribution from the MTurk validation task. The keys are `('positive', 'negative', 'neutral')` and the values are lists of anonymized MTurk ids, which are used consistently throughout the dataset.
* `'gold_label'`: the label chosen by at least three of the five workers if there is one (possible values: `'positive'`, `'negative'`, `'neutral'`, and `'mixed'`), else `None`.
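One common way to coarsen the five-way `'sst_label'` into the ternary scheme used elsewhere in DynaSent is sketched below; note that this particular mapping is an illustrative assumption, not something defined by the dataset (the validated `'gold_label'` is the intended ternary label):
```python
def coarse_sst_label(sst_label):
    """Map the SST root label ('0'-'4') to a ternary sentiment label (illustrative convention)."""
    value = int(sst_label)
    if value <= 1:
        return "negative"
    if value == 2:
        return "neutral"
    return "positive"
```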
## Models
Model 0 and Model 1 from the paper are available here:
https://drive.google.com/drive/folders/1dpKrjNJfAILUQcJPAFc5YOXUT51VEjKQ?usp=sharing
This repository includes a Python module `dynasent_models.py` that provides a [Hugging Face](https://huggingface.co)-based wrapper around these ([PyTorch](https://pytorch.org)) models. Simple examples:
```python
import os
from dynasent_models import DynaSentModel
# `dynasent_model0` should be downloaded from the above Google Drive link and
# placed in the `models` directory. `dynasent_model1` works the same way.
model = DynaSentModel(os.path.join('models', 'dynasent_model0.bin'))
examples = [
"superb",
"They said the experience would be amazing, and they were right!",
"They said the experience would be amazing, and they were wrong!"]
model.predict(examples)
```
This should return the list `['positive', 'positive', 'negative']`.
The `predict_proba` method provides access to the predicted distribution over the class labels; see the demo at the bottom of `dynasent_models.py` for details.
The following code uses `load_dataset` from above to reproduce the Round 2 dev-set report on Model 0 from the paper:
```python
import os
from sklearn.metrics import classification_report
from dynasent_models import DynaSentModel
dev_filename = os.path.join('dynasent-v1.1', 'dynasent-v1.1-round02-dynabench-dev.jsonl')
dev = load_dataset(dev_filename)
X_dev, y_dev = zip(*[(d['sentence'], d['gold_label']) for d in dev])
model = DynaSentModel(os.path.join('models', 'dynasent_model0.bin'))
preds = model.predict(X_dev)
print(classification_report(y_dev, preds, digits=3))
```
For a fuller report on these models, see our paper and [our model card](dynasent_modelcard.md).
## Other files
### Analysis notebooks
The following notebooks reproduce the dataset statistics, figures, and random example selections from the paper:
* `analyses_comparative.ipynb`
* `analysis_round1.ipynb`
* `analysis_round2.ipynb`
* `analysis_sst_dev_revalidate.ipynb`
The Python module `dynasent_utils.py` contains functions that support those notebooks, and `dynasent.mplstyle` helps with styling the plots.
### Datasheet
The [Datasheet](https://arxiv.org/abs/1803.09010) for our dataset:
* [dynasent_datasheet.md](dynasent_datasheet.md)
### Model Card
The [Model Card](https://arxiv.org/pdf/1810.03993.pdf) for our models:
* [dynasent_modelcard.md](dynasent_modelcard.md)
### Tests
The module `test_dataset.py` contains PyTest tests for the dataset. To use it, run
```
py.test -vv test_dataset.py
```
in the root directory of this repository.
### Validation HIT code
The file `validation-hit-contents.html` contains the HTML/JavaScript used in the validation task. It can be used directly on Amazon Mechanical Turk by pasting its contents into the usual HIT creation window.
## License
DynaSent has a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). |
open-source-metrics/stars | 2023-09-06T18:46:39.000Z | [
"region:us"
] | open-source-metrics | null | null | null | 0 | 303 | ---
dataset_info:
features:
- name: login
dtype: string
- name: dates
dtype: string
splits:
- name: peft
num_bytes: 350334
num_examples: 9427
- name: hub_docs
num_bytes: 6113
num_examples: 163
- name: evaluate
num_bytes: 56836
num_examples: 1517
- name: huggingface_hub
num_bytes: 42720
num_examples: 1134
- name: accelerate
num_bytes: 209877
num_examples: 5628
- name: datasets
num_bytes: 641185
num_examples: 17075
- name: optimum
num_bytes: 57345
num_examples: 1529
- name: pytorch_image_models
num_bytes: 993041
num_examples: 26636
- name: gradio
num_bytes: 803110
num_examples: 21598
- name: tokenizers
num_bytes: 279046
num_examples: 7528
- name: diffusers
num_bytes: 657188
num_examples: 17675
- name: transformers
num_bytes: 4154563
num_examples: 111365
- name: safetensors
num_bytes: 55868
num_examples: 1509
download_size: 5048468
dataset_size: 8307226
configs:
- config_name: default
data_files:
- split: peft
path: data/peft-*
- split: hub_docs
path: data/hub_docs-*
- split: evaluate
path: data/evaluate-*
- split: huggingface_hub
path: data/huggingface_hub-*
- split: accelerate
path: data/accelerate-*
- split: datasets
path: data/datasets-*
- split: optimum
path: data/optimum-*
- split: pytorch_image_models
path: data/pytorch_image_models-*
- split: gradio
path: data/gradio-*
- split: tokenizers
path: data/tokenizers-*
- split: diffusers
path: data/diffusers-*
- split: transformers
path: data/transformers-*
- split: safetensors
path: data/safetensors-*
---
# Dataset Card for "stars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
coconutzhang/ghc_session_data_v2 | 2023-08-29T21:26:51.000Z | [
"region:us"
] | coconutzhang | null | null | null | 0 | 303 | ---
dataset_info:
features:
- name: User
dtype: string
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 307868
num_examples: 1215
download_size: 140534
dataset_size: 307868
---
# Dataset Card for "ghc_session_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/3af02cc5 | 2023-09-28T17:37:53.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 303 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164
num_examples: 10
download_size: 1315
dataset_size: 164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "3af02cc5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
multidoc2dial | 2023-08-29T09:45:02.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:extended|doc2dial",
"language:en",
"license:apache-2.0",
"arxiv:2109.12595",
"region:us"
] | null | MultiDoc2Dial is a new task and dataset on modeling goal-oriented dialogues grounded in multiple documents. Most previous works treat document-grounded dialogue modeling as a machine reading comprehension task based on a single given document or passage. We aim to address more realistic scenarios where a goal-oriented information-seeking conversation involves multiple topics, and hence is grounded on different documents. | @inproceedings{feng2021multidoc2dial,
title={MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents},
author={Feng, Song and Patel, Siva Sankalp and Wan, Hui and Joshi, Sachindra},
booktitle={EMNLP},
year={2021}
} | null | 2 | 302 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|doc2dial
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: multidoc2dial
pretty_name: MultiDoc2Dial
config_names:
- dialogue_domain
- document_domain
- multidoc2dial
dataset_info:
- config_name: dialogue_domain
features:
- name: dial_id
dtype: string
- name: domain
dtype: string
- name: turns
list:
- name: turn_id
dtype: int32
- name: role
dtype: string
- name: da
dtype: string
- name: references
list:
- name: id_sp
dtype: string
- name: label
dtype: string
- name: doc_id
dtype: string
- name: utterance
dtype: string
splits:
- name: train
num_bytes: 11700558
num_examples: 3474
- name: validation
num_bytes: 2210338
num_examples: 661
download_size: 6868509
dataset_size: 13910896
- config_name: document_domain
features:
- name: domain
dtype: string
- name: doc_id
dtype: string
- name: title
dtype: string
- name: doc_text
dtype: string
- name: spans
list:
- name: id_sp
dtype: string
- name: tag
dtype: string
- name: start_sp
dtype: int32
- name: end_sp
dtype: int32
- name: text_sp
dtype: string
- name: title
dtype: string
- name: parent_titles
sequence:
- name: id_sp
dtype: string
- name: text
dtype: string
- name: level
dtype: string
- name: id_sec
dtype: string
- name: start_sec
dtype: int32
- name: text_sec
dtype: string
- name: end_sec
dtype: int32
- name: doc_html_ts
dtype: string
- name: doc_html_raw
dtype: string
splits:
- name: train
num_bytes: 29378879
num_examples: 488
download_size: 6868509
dataset_size: 29378879
- config_name: multidoc2dial
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: da
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: utterance
dtype: string
- name: domain
dtype: string
splits:
- name: validation
num_bytes: 24331936
num_examples: 4201
- name: train
num_bytes: 126589862
num_examples: 21451
- name: test
num_bytes: 23026892
num_examples: 4094
download_size: 6868509
dataset_size: 173948690
---
# Dataset Card for MultiDoc2Dial
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doc2dial.github.io/multidoc2dial/
- **Repository:** https://github.com/IBM/multidoc2dial
- **Paper:** https://arxiv.org/pdf/2109.12595.pdf
- **Leaderboard:**
- **Point of Contact:** sngfng@gmail.com
### Dataset Summary
MultiDoc2Dial is a new task and dataset on modeling goal-oriented dialogues grounded in multiple documents.
Most previous works treat document-grounded dialogue modeling as a machine reading comprehension task based on a
single given document or passage. We aim to address more realistic scenarios where a goal-oriented information-seeking
conversation involves multiple topics, and hence is grounded on different documents.
### Supported Tasks and Leaderboards
> Supported Task: Open domain question answering, document-grounded dialogue, passage retrieval
> Leaderboard:
### Languages
English
## Dataset Structure
### Data Instances
Sample data instance for `multidoc2dial` :
```
{
"id": "8df07b7a98990db27c395cb1f68a962e_1",
"title": "Top 5 DMV Mistakes and How to Avoid Them#3_0",
"context": "Many DMV customers make easily avoidable mistakes that cause them significant problems, including encounters with law enforcement and impounded vehicles. Because we see customers make these mistakes over and over again , we are issuing this list of the top five DMV mistakes and how to avoid them. \n\n1. Forgetting to Update Address \nBy statute , you must report a change of address to DMV within ten days of moving. That is the case for the address associated with your license, as well as all the addresses associated with each registered vehicle, which may differ. It is not sufficient to only: write your new address on the back of your old license; tell the United States Postal Service; or inform the police officer writing you a ticket. If you fail to keep your address current , you will miss a suspension order and may be charged with operating an unregistered vehicle and/or aggravated unlicensed operation, both misdemeanors. This really happens , but the good news is this is a problem that is easily avoidable. Learn more about how to change the address on your license and registrations [1 ] \n\n2. Leaving the State Without Notifying DMV \nStates communicate with each other , so when you move to another state, be sure to tie up any loose ends regarding your New York State license or registration. That means resolving any unanswered tickets, suspensions or revocations, and surrendering your license plates to NYS when you get to your new home state. A license suspension or revocation here could mean that your new home state will not issue you a license there. Remember , it is important to notify DMV of your new address so that any possible mail correspondence can reach you. Also , turning in your plates is important to avoid an insurance lapse. \n\n3. Letting Insurance Lapse \nBecause we all pay indirectly for crashes involving uninsured motorists , New York State requires every motorist to maintain auto insurance every single day a vehicle is registered. DMV works with insurance companies to electronically monitor your insurance coverage , and we know when coverage is dropped for any reason. When that happens , we mail you an insurance inquiry letter to allow you to clear up the problem. We send 500,000 inquiry letters a year. If the inquiry letter does not resolve the problem , we must suspend the vehicle registration and , if it persists, your driver license!We suspend 300,000 registrations a year for failure to maintain insurance. If you fail to maintain an updated address with us , you won t learn that you have an insurance problem , and we will suspend your registration and license. Make sure you turn in your vehicle s license plates at DMV before you cancel your insurance policy. Insurance policies must be from a company licensed in New York State. Learn more about Insurances Lapes [2] and How to Surrender your Plates [3 ] \n\n4. Understanding how Much Traffic Points Cost \nDMV maintains a point system to track dangerous drivers. Often , motorists convicted of a traffic ticket feel they have resolved all their motoring issues with the local court, but later learn that the Driver Responsibility Assessment DRA is a separate DMV charge based on the total points they accumulate. The $300 DRA fee can be paid in $100 annual installments over three years. Motorists who fail to maintain an updated address with DMV may resolve their tickets with the court, but never receive their DRA assessment because we do not have their new address on record. 
Failure to pay the DRA will result in a suspended license. Learn more about About the NYS Driver Point System [4] and how to Pay Driver Responsibility Assessment [5 ] \n\n5. Not Bringing Proper Documentation to DMV Office \nAbout ten percent of customers visiting a DMV office do not bring what they need to complete their transaction, and have to come back a second time to finish their business. This can be as simple as not bringing sufficient funds to pay for a license renewal or not having the proof of auto insurance required to register a car. Better yet , don t visit a DMV office at all, and see if your transaction can be performed online, like an address change, registration renewal, license renewal, replacing a lost title, paying a DRA or scheduling a road test. Our award - winning website is recognized as one of the best in the nation. It has all the answers you need to efficiently perform any DMV transaction. Consider signing up for our MyDMV service, which offers even more benefits. Sign up or log into MyDMV [6 ] ",
"question": "Hello, I forgot o update my address, can you help me with that?[SEP]",
"da": "query_condition",
"answers":
{
"text": ['you must report a change of address to DMV within ten days of moving. That is the case for the address associated with your license, as well as all the addresses associated with each registered vehicle, which may differ. "],
"answer_start": [346]
},
"utterance": "hi, you have to report any change of address to DMV within 10 days after moving. You should do this both for the address associated with your license and all the addresses associated with all your vehicles.",
"domain": "dmv"
}
```
Sample data instance for `document_domain` :
```
{
"domain": "ssa",
"doc_id": "Benefits Planner: Survivors | Planning For Your Survivors | Social Security Administration#1_0",
"title": "Benefits Planner: Survivors | Planning For Your Survivors | Social Security Administration#1",
"doc_text": "\n\nBenefits Planner: Survivors | Planning For Your Survivors \nAs you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. You can earn up to four credits each year. In 2019 , for example , you earn one credit for each $1,360 of wages or self - employment income. When you have earned $5,440 , you have earned your four credits for the year. The number of credits needed to provide benefits for your survivors depends on your age when you die. No one needs more than 40 credits 10 years of work to be eligible for any Social Security benefit. But , the younger a person is , the fewer credits they must have for family members to receive survivors benefits. Benefits can be paid to your children and your spouse who is caring for the children even if you don't have the required number of credits. They can get benefits if you have credit for one and one - half years of work 6 credits in the three years just before your death. \n\nFor Your Widow Or Widower \nThere are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse's earnings record. And , for many of those survivors, particularly aged women, those benefits are keeping them out of poverty. Widows and widowers can receive : reduced benefits as early as age 60 or full benefits at full retirement age or older. benefits as early as age 50 if they're disabled AND their disability started before or within seven years of your death. benefits at any age , if they have not remarried , and if they take care of your child who is under age 16 or disabled and receives benefits on your record. If applying for disability benefits on a deceased worker s record , they can speed up the application process if they complete an Adult Disability Report and have it available at the time of their appointment. We use the same definition of disability for widows and widowers as we do for workers. \n\nFor Your Surviving Divorced Spouse \nIf you have a surviving divorced spouse , they could get the same benefits as your widow or widower provided that your marriage lasted 10 years or more. Benefits paid to a surviving divorced spouse won't affect the benefit amounts your other survivors will receive based on your earnings record. If your former spouse is caring for your child who is under age 16 or disabled and gets benefits on your record , they will not have to meet the length - of - marriage rule. The child must be your natural or legally adopted child. \n\nFor Your Children \nYour unmarried children who are under 18 up to age 19 if attending elementary or secondary school full time can be eligible to receive Social Security benefits when you die. And your child can get benefits at any age if they were disabled before age 22 and remain disabled. Besides your natural children , your stepchildren, grandchildren, step grandchildren or adopted children may receive benefits under certain circumstances. For further information , view our publication. \n\nFor Your Parents \nYou must have been providing at least half of your parent s support and your parent must not be eligible to receive a retirement benefit that is higher than the benefit we could pay on your record. Generally, your parent also must not have married after your death ; however, there are some exceptions. 
In addition to your natural parent , your stepparent or adoptive parent may receive benefits if they became your parent before you were age 16. \n\nHow Much Would Your Survivors Receive \nHow much your family could receive in benefits depends on your average lifetime earnings. The higher your earnings were , the higher their benefits would be. We calculate a basic amount as if you had reached full retirement age at the time you die. These are examples of monthly benefit payments : Widow or widower, full retirement age or older 100 percent of your benefit amount ; Widow or widower , age 60 to full retirement age 71 to 99 percent of your basic amount ; Disabled widow or widower , age 50 through 59 71 percent ; Widow or widower , any age, caring for a child under age 16 75 percent ; A child under age 18 19 if still in elementary or secondary school or disabled 75 percent ; and Your dependent parent , age 62 or older : One surviving parent 82 percent. Two surviving parents 75 percent to each parent. Percentages for a surviving divorced spouse would be the same as above. There may also be a special lump - sum death payment. \n\nMaximum Family Amount \nThere's a limit to the amount that family members can receive each month. The limit varies , but it is generally equal to between 150 and 180 percent of the basic benefit rate. If the sum of the benefits payable to family members is greater than this limit , the benefits will be reduced proportionately. Any benefits paid to a surviving divorced spouse based on disability or age won't count toward this maximum amount. Get your online or check our Benefit Calculators for an estimate of the benefits your family could receive if you died right now. \n\nOther Things You Need To Know \nThere are limits on how much survivors may earn while they receive benefits. Benefits for a widow, widower, or surviving divorced spouse may be affected by several additional factors : If your widow, widower, or surviving divorced spouse remarries before they reach age 60 age 50 if disabled , they cannot receive benefits as a surviving spouse while they're married. If your widow, widower, or surviving divorced spouse remarries after they reach age 60 age 50 if disabled , they will continue to qualify for benefits on your Social Security record. However , if their current spouse is a Social Security beneficiary , they may want to apply for spouse's benefits on their record. If that amount is more than the widow's or widower's benefit on your record , they will receive a combination of benefits that equals the higher amount. If your widow, widower, or surviving divorced spouse receives benefits on your record , they can switch to their own retirement benefit as early as age 62. This assumes they're eligible for retirement benefits and their retirement rate is higher than their rate as a widow, widower, or surviving divorced spouse. In many cases , a widow or widower can begin receiving one benefit at a reduced rate and then, at full retirement age, switch to the other benefit at an unreduced rate. If your widow, widower, or surviving divorced spouse will also receive a pension based on work not covered by Social Security, such as government or foreign work , their Social Security benefits as a survivor may be affected. ",
"spans": [
{
"id_sp": "1",
"tag": "h2",
"start_sp": 0,
"end_sp": 61,
"text_sp": "\n\nBenefits Planner: Survivors | Planning For Your Survivors \n",
"title": "Benefits Planner: Survivors | Planning For Your Survivors",
"parent_titles": {
"id_sp": [],
"text": [],
"level": []
},
"id_sec": "t_0",
"start_sec": 0,
"text_sec": "\n\nBenefits Planner: Survivors | Planning For Your Survivors \n",
"end_sec": 61
},
{
"id_sp": "2",
"tag": "u",
"start_sp": 61,
"end_sp": 90,
"text_sp": "As you plan for the future , ",
"title": "Benefits Planner: Survivors | Planning For Your Survivors",
"parent_titles": {
"id_sp": [],
"text": [],
"level": []
},
"id_sec": "1",
"start_sec": 61,
"text_sec": "As you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. ",
"end_sec": 274
},
{
"id_sp": "3",
"tag": "u",
"start_sp": 90,
"end_sp": 168,
"text_sp": "you'll want to think about what your family would need if you should die now. ",
"title": "Benefits Planner: Survivors | Planning For Your Survivors",
"parent_titles": {
"id_sp": [],
"text": [],
"level": []
},
"id_sec": "1",
"start_sec": 61,
"text_sec": "As you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. ",
"end_sec": 274
}
],
"doc_html_ts": "<main><section><div><h2 sent_id=\"1\" text_id=\"1\">Benefits Planner: Survivors | Planning For Your Survivors</h2></div></section><section><div><article><section><div tag_id=\"1\"><u sent_id=\"2\" tag_id=\"1\"><u sent_id=\"2\" tag_id=\"1\" text_id=\"2\">As you plan for the future ,</u><u sent_id=\"2\" tag_id=\"1\" text_id=\"3\">you 'll want to think about what your family would need if you should die now .</u></u><u sent_id=\"3\" tag_id=\"1\"><u sent_id=\"3\" tag_id=\"1\" text_id=\"4\">Social Security can help your family if you have earned enough Social Security credits through your work .</u></u></div><div tag_id=\"2\"><u sent_id=\"4\" tag_id=\"2\"><u sent_id=\"4\" tag_id=\"2\" text_id=\"5\">You can earn up to four credits each year .</u></u><u sent_id=\"5\" tag_id=\"2\"><u sent_id=\"5\" tag_id=\"2\" text_id=\"6\">In 2019 ,</u><u sent_id=\"5\" tag_id=\"2\" text_id=\"7\">for example ,</u><u sent_id=\"5\" tag_id=\"2\" text_id=\"8\">you earn one credit for each $ 1,360 of wages or self - employment income .</u></u><u sent_id=\"6\" tag_id=\"2\"><u sent_id=\"6\" tag_id=\"2\" text_id=\"9\">When you have earned $ 5,440 ,</u><u sent_id=\"6\" tag_id=\"2\" text_id=\"10\">you have earned your four credits for the year .</u></u></div><div tag_id=\"3\"><u sent_id=\"7\" tag_id=\"3\"><u sent_id=\"7\" tag_id=\"3\" text_id=\"11\">The number of credits needed to provide benefits for your survivors depends on your age when you die .</u></u><u sent_id=\"8\" tag_id=\"3\"><u sent_id=\"8\" tag_id=\"3\" text_id=\"12\">No one needs more than 40 credits 10 years of work to be eligible for any Social Security benefit .</u></u><u sent_id=\"9\" tag_id=\"3\"><u sent_id=\"9\" tag_id=\"3\" text_id=\"13\">But ,</u><u sent_id=\"9\" tag_id=\"3\" text_id=\"14\">the younger a person is ,</u><u sent_id=\"9\" tag_id=\"3\" text_id=\"15\">the fewer credits they must have for family members to receive survivors benefits .</u></u></div><div tag_id=\"4\"><u sent_id=\"10\" tag_id=\"4\"><u sent_id=\"10\" tag_id=\"4\" text_id=\"16\">Benefits can be paid to your children and your spouse who is caring for the children even if you do n't have the required number of credits .</u></u><u sent_id=\"11\" tag_id=\"4\"><u sent_id=\"11\" tag_id=\"4\" text_id=\"17\">They can get benefits if you have credit for one and one - half years of work 6 credits in the three years just before your death .</u></u></div></section><section><h3 sent_id=\"12\" text_id=\"18\">For Your Widow Or Widower</h3><div tag_id=\"5\"><u sent_id=\"13\" tag_id=\"5\"><u sent_id=\"13\" tag_id=\"5\" text_id=\"19\">There are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse 's earnings record .</u></u><u sent_id=\"14\" tag_id=\"5\"><u sent_id=\"14\" tag_id=\"5\" text_id=\"20\">And ,</u><u sent_id=\"14\" tag_id=\"5\" text_id=\"21\">for many of those survivors , particularly aged women , those benefits are keeping them out of poverty .</u></u></div><div tag_id=\"6\"><u sent_id=\"15\" tag_id=\"6\"><u sent_id=\"15\" tag_id=\"6\" text_id=\"22\">Widows and widowers can receive :</u></u></div><ul class=\"browser-default\" tag_id=\"6\"><li tag_id=\"6\"><u sent_id=\"16\" tag_id=\"6\"><u sent_id=\"16\" tag_id=\"6\" text_id=\"23\">reduced benefits as early as age 60 or full benefits at full retirement age or older .</u></u></li><div>If widows or widowers qualify for retirement benefits on their own record, they can switch to their own retirement benefit as early as age 62.</div><li tag_id=\"6\"><u 
sent_id=\"17\" tag_id=\"6\"><u sent_id=\"17\" tag_id=\"6\" text_id=\"24\">benefits as early as age 50 if they 're disabled AND their disability started before or within seven years of your death .</u></u></li><div>If a widow or widower who is caring for your children receives Social Security benefits, they're still eligible if their disability starts before those payments end or within seven years after they end.</div><li tag_id=\"6\"><u sent_id=\"18\" tag_id=\"6\"><u sent_id=\"18\" tag_id=\"6\" text_id=\"25\">benefits at any age ,</u><u sent_id=\"18\" tag_id=\"6\" text_id=\"26\">if they have not remarried ,</u><u sent_id=\"18\" tag_id=\"6\" text_id=\"27\">and if they take care of your child who is under age 16 or disabled and receives benefits on your record .</u></u></li><div>If a widow or widower remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.</div></ul><div>Widows, widowers, and surviving divorced spouses cannot apply online for survivors benefits. They should <a>contact Social Security</a> at <nobr><strong>1-800-772-1213</strong></nobr> (TTY <nobr><strong>1-800-325-0778</strong>) to request an appointment.</nobr></div><div tag_id=\"7\"><u sent_id=\"19\" tag_id=\"7\"><u sent_id=\"19\" tag_id=\"7\" text_id=\"28\">If applying for disability benefits on a deceased worker s record ,</u><u sent_id=\"19\" tag_id=\"7\" text_id=\"29\">they can speed up the application process if they complete an Adult Disability Report and have it available at the time of their appointment .</u></u></div><div tag_id=\"8\"><u sent_id=\"20\" tag_id=\"8\"><u sent_id=\"20\" tag_id=\"8\" text_id=\"30\">We use the same definition of disability for widows and widowers as we do for workers .</u></u></div></section><section><h3 sent_id=\"21\" text_id=\"31\">For Your Surviving Divorced Spouse</h3><div tag_id=\"9\"><u sent_id=\"22\" tag_id=\"9\"><u sent_id=\"22\" tag_id=\"9\" text_id=\"32\">If you have a surviving divorced spouse ,</u><u sent_id=\"22\" tag_id=\"9\" text_id=\"33\">they could get the same benefits as your widow or widower provided that your marriage lasted 10 years or more .</u></u></div><div>If your surviving divorced spouse qualifies for retirement benefits on their own record they can switch to their own retirement benefit as early as age 62.</div><div tag_id=\"10\"><u sent_id=\"23\" tag_id=\"10\"><u sent_id=\"23\" tag_id=\"10\" text_id=\"34\">Benefits paid to a surviving divorced spouse wo n't affect the benefit amounts your other survivors will receive based on your earnings record .</u></u></div><div>If your surviving divorced spouse remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.</div><div tag_id=\"11\"><u sent_id=\"24\" tag_id=\"11\"><u sent_id=\"24\" tag_id=\"11\" text_id=\"35\">If your former spouse is caring for your child who is under age 16 or disabled and gets benefits on your record ,</u><u sent_id=\"24\" tag_id=\"11\" text_id=\"36\">they will not have to meet the length - of - marriage rule .</u></u><u sent_id=\"25\" tag_id=\"11\"><u sent_id=\"25\" tag_id=\"11\" text_id=\"37\">The child must be your natural or legally adopted child .</u></u></div><div>However, if they qualify for benefits as a surviving divorced mother or father who is caring for your child, their benefits may affect the amount of benefits your other survivors will receive based on your earnings record.</div></section><section><h3 
sent_id=\"26\" text_id=\"38\">For Your Children</h3><div tag_id=\"12\"><u sent_id=\"27\" tag_id=\"12\"><u sent_id=\"27\" tag_id=\"12\" text_id=\"39\">Your unmarried children who are under 18 up to age 19 if attending elementary or secondary school full time can be eligible to receive Social Security benefits when you die .</u></u></div><div tag_id=\"13\"><u sent_id=\"28\" tag_id=\"13\"><u sent_id=\"28\" tag_id=\"13\" text_id=\"40\">And your child can get benefits at any age if they were disabled before age 22 and remain disabled .</u></u></div><div tag_id=\"14\"><u sent_id=\"29\" tag_id=\"14\"><u sent_id=\"29\" tag_id=\"14\" text_id=\"41\">Besides your natural children ,</u><u sent_id=\"29\" tag_id=\"14\" text_id=\"42\">your stepchildren , grandchildren , step grandchildren or adopted children may receive benefits under certain circumstances .</u></u><u sent_id=\"30\" tag_id=\"14\"><u sent_id=\"30\" tag_id=\"14\" text_id=\"43\">For further information ,</u><u sent_id=\"30\" tag_id=\"14\" text_id=\"44\">view our publication .</u></u></div></section><section><h3 sent_id=\"31\" text_id=\"45\">For Your Parents</h3><div tag_id=\"15\"><u sent_id=\"32\" tag_id=\"15\"><u sent_id=\"32\" tag_id=\"15\" text_id=\"46\">You must have been providing at least half of your parent s support and your parent must not be eligible to receive a retirement benefit that is higher than the benefit we could pay on your record .</u></u><u sent_id=\"33\" tag_id=\"15\"><u sent_id=\"33\" tag_id=\"15\" text_id=\"47\">Generally , your parent also must not have married after your death ;</u><u sent_id=\"33\" tag_id=\"15\" text_id=\"48\">however , there are some exceptions .</u></u></div><div tag_id=\"16\"><u sent_id=\"34\" tag_id=\"16\"><u sent_id=\"34\" tag_id=\"16\" text_id=\"49\">In addition to your natural parent ,</u><u sent_id=\"34\" tag_id=\"16\" text_id=\"50\">your stepparent or adoptive parent may receive benefits if they became your parent before you were age 16 .</u></u></div></section><section><h3 sent_id=\"35\" text_id=\"51\">How Much Would Your Survivors Receive</h3><div tag_id=\"17\"><u sent_id=\"36\" tag_id=\"17\"><u sent_id=\"36\" tag_id=\"17\" text_id=\"52\">How much your family could receive in benefits</u><u sent_id=\"36\" tag_id=\"17\" text_id=\"53\">depends on your average lifetime earnings .</u></u><u sent_id=\"37\" tag_id=\"17\"><u sent_id=\"37\" tag_id=\"17\" text_id=\"54\">The higher your earnings were ,</u><u sent_id=\"37\" tag_id=\"17\" text_id=\"55\">the higher their benefits would be .</u></u><u sent_id=\"38\" tag_id=\"17\"><u sent_id=\"38\" tag_id=\"17\" text_id=\"56\">We calculate a basic amount as if you had reached full retirement age at the time you die .</u></u></div><div>If you are already receiving reduced benefits when you die, survivors benefits are based on that amount.</div><div tag_id=\"18\"><u sent_id=\"39\" tag_id=\"18\"><u sent_id=\"39\" tag_id=\"18\" text_id=\"57\">These are examples of monthly benefit payments :</u></u></div><ul class=\"browser-default\" tag_id=\"18\"><li tag_id=\"18\"><u sent_id=\"40\" tag_id=\"18\"><u sent_id=\"40\" tag_id=\"18\" text_id=\"58\">Widow or widower , full retirement age or older 100 percent of your benefit amount ;</u></u></li><li tag_id=\"18\"><u sent_id=\"41\" tag_id=\"18\"><u sent_id=\"41\" tag_id=\"18\" text_id=\"59\">Widow or widower ,</u><u sent_id=\"41\" tag_id=\"18\" text_id=\"60\">age 60 to full retirement age 71 to 99 percent of your basic amount ;</u></u></li><li tag_id=\"18\"><u sent_id=\"42\" tag_id=\"18\"><u sent_id=\"42\" 
tag_id=\"18\" text_id=\"61\">Disabled widow or widower ,</u><u sent_id=\"42\" tag_id=\"18\" text_id=\"62\">age 50 through 59 71 percent ;</u></u></li><li tag_id=\"18\"><u sent_id=\"43\" tag_id=\"18\"><u sent_id=\"43\" tag_id=\"18\" text_id=\"63\">Widow or widower ,</u><u sent_id=\"43\" tag_id=\"18\" text_id=\"64\">any age , caring for a child under age 16 75 percent ;</u></u></li><li tag_id=\"18\"><u sent_id=\"44\" tag_id=\"18\"><u sent_id=\"44\" tag_id=\"18\" text_id=\"65\">A child under age 18 19 if still in elementary or secondary school or disabled 75 percent ;</u><u sent_id=\"44\" tag_id=\"18\" text_id=\"66\">and</u></u></li><li tag_id=\"18\"><div tag_id=\"18\"><u sent_id=\"48\" tag_id=\"18\"><u sent_id=\"48\" tag_id=\"18\" text_id=\"67\">Your dependent parent ,</u><u sent_id=\"48\" tag_id=\"18\" text_id=\"68\">age 62 or older :</u></u></div><ul class=\"browser-default\" tag_id=\"18\"><li tag_id=\"18\"><u sent_id=\"49\" tag_id=\"18\"><u sent_id=\"49\" tag_id=\"18\" text_id=\"69\">One surviving parent 82 percent .</u></u></li><li tag_id=\"18\"><u sent_id=\"50\" tag_id=\"18\"><u sent_id=\"50\" tag_id=\"18\" text_id=\"70\">Two surviving parents 75 percent to each parent .</u></u></li></ul></li></ul><div tag_id=\"19\"><u sent_id=\"51\" tag_id=\"19\"><u sent_id=\"51\" tag_id=\"19\" text_id=\"71\">Percentages for a surviving divorced spouse would be the same as above .</u></u></div><div tag_id=\"20\"><u sent_id=\"52\" tag_id=\"20\"><u sent_id=\"52\" tag_id=\"20\" text_id=\"72\">There may also be a special lump - sum death payment .</u></u></div><h3 sent_id=\"53\" text_id=\"73\">Maximum Family Amount</h3><div tag_id=\"21\"><u sent_id=\"54\" tag_id=\"21\"><u sent_id=\"54\" tag_id=\"21\" text_id=\"74\">There 's a limit to the amount that family members can receive each month .</u></u><u sent_id=\"55\" tag_id=\"21\"><u sent_id=\"55\" tag_id=\"21\" text_id=\"75\">The limit varies ,</u><u sent_id=\"55\" tag_id=\"21\" text_id=\"76\">but it is generally equal to between 150 and 180 percent of the basic benefit rate .</u></u></div><div tag_id=\"22\"><u sent_id=\"56\" tag_id=\"22\"><u sent_id=\"56\" tag_id=\"22\" text_id=\"77\">If the sum of the benefits payable to family members is greater than this limit ,</u><u sent_id=\"56\" tag_id=\"22\" text_id=\"78\">the benefits will be reduced proportionately .</u></u><u sent_id=\"57\" tag_id=\"22\"><u sent_id=\"57\" tag_id=\"22\" text_id=\"79\">Any benefits paid to a surviving divorced spouse based on disability or age wo n't count toward this maximum amount .</u></u></div><div tag_id=\"23\"><u sent_id=\"58\" tag_id=\"23\"><u sent_id=\"58\" tag_id=\"23\" text_id=\"80\">Get your online or check our Benefit Calculators for an estimate of the benefits your family could receive if you died right now .</u></u></div><h3 sent_id=\"59\" text_id=\"81\">Other Things You Need To Know</h3><div tag_id=\"24\"><u sent_id=\"60\" tag_id=\"24\"><u sent_id=\"60\" tag_id=\"24\" text_id=\"82\">There are limits on how much survivors may earn while they receive benefits .</u></u></div><div tag_id=\"25\"><u sent_id=\"61\" tag_id=\"25\"><u sent_id=\"61\" tag_id=\"25\" text_id=\"83\">Benefits for a widow , widower , or surviving divorced spouse may be affected by several additional factors :</u></u></div><div><a>If they remarry</a><section><div tag_id=\"26\"><u sent_id=\"62\" tag_id=\"26\"><u sent_id=\"62\" tag_id=\"26\" text_id=\"84\">If your widow , widower , or surviving divorced spouse remarries before they reach age 60 age 50 if disabled ,</u><u sent_id=\"62\" tag_id=\"26\" 
text_id=\"85\">they can not receive benefits as a surviving spouse while they 're married .</u></u></div><div tag_id=\"27\"><u sent_id=\"63\" tag_id=\"27\"><u sent_id=\"63\" tag_id=\"27\" text_id=\"86\">If your widow , widower , or surviving divorced spouse remarries after they reach age 60 age 50 if disabled ,</u><u sent_id=\"63\" tag_id=\"27\" text_id=\"87\">they will continue to qualify for benefits on your Social Security record .</u></u></div><div tag_id=\"28\"><u sent_id=\"64\" tag_id=\"28\"><u sent_id=\"64\" tag_id=\"28\" text_id=\"88\">However ,</u><u sent_id=\"64\" tag_id=\"28\" text_id=\"89\">if their current spouse is a Social Security beneficiary ,</u><u sent_id=\"64\" tag_id=\"28\" text_id=\"90\">they may want to apply for spouse 's benefits on their record .</u></u><u sent_id=\"65\" tag_id=\"28\"><u sent_id=\"65\" tag_id=\"28\" text_id=\"91\">If that amount is more than the widow 's or widower 's benefit on your record ,</u><u sent_id=\"65\" tag_id=\"28\" text_id=\"92\">they will receive a combination of benefits that equals the higher amount .</u></u></div></section></div><div><a>If they're eligible for retirement benefits on their own record</a><section><div tag_id=\"29\"><u sent_id=\"66\" tag_id=\"29\"><u sent_id=\"66\" tag_id=\"29\" text_id=\"93\">If your widow , widower , or surviving divorced spouse receives benefits on your record ,</u><u sent_id=\"66\" tag_id=\"29\" text_id=\"94\">they can switch to their own retirement benefit as early as age 62 .</u></u><u sent_id=\"67\" tag_id=\"29\"><u sent_id=\"67\" tag_id=\"29\" text_id=\"95\">This assumes they 're eligible for retirement benefits and their retirement rate is higher than their rate as a widow , widower , or surviving divorced spouse .</u></u></div><div tag_id=\"30\"><u sent_id=\"68\" tag_id=\"30\"><u sent_id=\"68\" tag_id=\"30\" text_id=\"96\">In many cases ,</u><u sent_id=\"68\" tag_id=\"30\" text_id=\"97\">a widow or widower can begin receiving one benefit at a reduced rate and then , at full retirement age , switch to the other benefit at an unreduced rate .</u></u></div><div><a>Full retirement age for retirement benefits</a> may not match full retirement age for survivors benefits.</div></section></div><div><a>If they will also receive a pension based on work not covered by Social Security</a><section><div tag_id=\"31\"><u sent_id=\"69\" tag_id=\"31\"><u sent_id=\"69\" tag_id=\"31\" text_id=\"98\">If your widow , widower , or surviving divorced spouse will also receive a pension based on work not covered by Social Security , such as government or foreign work ,</u><u sent_id=\"69\" tag_id=\"31\" text_id=\"99\">their Social Security benefits as a survivor may be affected .</u></u></div></section></div></section></article></div></section></main>",
"doc_html_raw": "<main class=\"content\" id=\"content\" role=\"main\">\n\n<section>\n\n<div>\n<h2>Benefits Planner: Survivors | Planning For Your Survivors</h2>\n</div>\n</section>\n\n<section>\n\n<div>\n\n<div>\n\n\n</div>\n\n\n\n<article>\n<section>\n<p>As you plan for the future, you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work.</p>\n<p><a>You can earn up to four credits each year</a>. In 2019, for example, you earn one credit for each $1,360 of wages or <a>self-employment</a> income. When you have earned $5,440, you have earned your four credits for the year.</p>\n<p>The number of credits needed to provide benefits for your survivors depends on your age when you die. No one needs more than 40 credits (10 years of work) to be eligible for any Social Security benefit. But, the younger a person is, the fewer credits they must have for family members to receive survivors benefits.</p>\n<p>Benefits can be paid to your children and your spouse who is caring for the children even if you don't have the required number of credits. They can get benefits if you have credit for one and one-half years of work (6 credits) in the three years just before your death.</p>\n</section>\n<section>\n<h3>For Your Widow Or Widower</h3>\n<p>There are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse's earnings record. And, for many of those survivors, particularly aged women, those benefits are keeping them out of poverty. </p>\n<p>Widows and widowers can receive:</p>\n<ul class=\"browser-default\">\n<li>reduced benefits as early as age 60 or full benefits at <a>full retirement age</a> or older.</li>\n<div>\n If widows or widowers qualify for retirement benefits on their own record, they can switch to their own retirement benefit as early as age 62.\n </div>\n<li>benefits as early as age 50 if they're disabled AND their disability started before or within seven years of your death.</li>\n<div>\n If a widow or widower who is caring for your children receives Social Security benefits, they're still eligible if their disability starts before those payments end or within seven years after they end.\n </div>\n<li>benefits at any age, if they have not remarried, and if they take care of your child who is under age 16 or disabled and receives benefits on your record.</li>\n<div>\n If a widow or widower remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.\n </div>\n</ul>\n<div>\n Widows, widowers, and surviving divorced spouses cannot apply online for survivors benefits. 
They should <a>contact Social Security</a> at <nobr><strong>1-800-772-1213</strong></nobr> (TTY <nobr><strong>1-800-325-0778</strong>) to request an appointment.</nobr>\n</div>\n<p>If applying for disability benefits on a deceased worker s record, they can speed up the application process if they complete an <a>Adult Disability Report</a> and have it available at the time of their appointment.</p>\n<p>We use the same <a>definition of disability</a> for widows and widowers as we do for workers.</p>\n</section>\n<section>\n<h3>For Your Surviving Divorced Spouse</h3>\n<p>If you have a surviving divorced spouse, they could get the same benefits as your widow or widower provided that your marriage lasted 10 years or more.</p>\n<div>\n If your surviving divorced spouse qualifies for retirement benefits on their own record they can switch to their own retirement benefit as early as age 62.\n </div>\n<p>Benefits paid to a surviving divorced spouse won't affect the benefit amounts your other survivors will receive based on your earnings record.</p>\n<div>\n If your surviving divorced spouse remarries <strong>after they reach age 60</strong> (age 50 if disabled), the remarriage will not affect their eligibility for survivors benefits.\n </div>\n<p>If your former spouse is caring for your child who is under age 16 or disabled and gets benefits on your record, they will not have to meet the length-of-marriage rule. The child must be your natural or legally adopted child.</p>\n<div>\n However, if they qualify for benefits as a surviving divorced mother or father who is caring for your child, their benefits may affect the amount of benefits your other survivors will receive based on your earnings record.\n </div>\n</section>\n<section>\n<h3>For Your Children</h3>\n<p>Your unmarried children who are under 18 (up to age 19 if attending elementary or secondary school full time) can be eligible to receive Social Security benefits when you die.</p>\n<p>And your child can get benefits at any age if they were disabled before age 22 and remain disabled.</p>\n<p>Besides your natural children, your stepchildren, grandchildren, step grandchildren or adopted children may receive benefits under certain circumstances. For further information, view our <a>publication</a>.</p>\n</section>\n<section>\n<h3>For Your Parents</h3>\n<p>You must have been providing at least half of your parent s support and your parent must not be eligible to receive a retirement benefit that is higher than the benefit we could pay on your record. Generally, your parent also must not have married after your death; however, there are some exceptions.</p>\n<p>In addition to your natural parent, your stepparent or adoptive parent may receive benefits if they became your parent before you were age 16.</p>\n</section>\n<section>\n<h3>How Much Would Your Survivors Receive</h3>\n<p>How much your family could receive in benefits depends on your average lifetime earnings. The higher your earnings were, the higher their benefits would be. 
We calculate a basic amount as if you had reached full retirement age at the time you die.</p>\n<div>\n If you are already receiving reduced benefits when you die, survivors benefits are based on that amount.\n </div>\n<p>These are examples of monthly benefit payments:</p>\n<ul class=\"browser-default\">\n<li>Widow or widower, <a>full retirement age</a> or older 100 percent of your benefit amount;</li>\n<li>Widow or widower, age 60 to <a>full retirement age</a> 71 to 99 percent of your basic amount;</li>\n<li>Disabled widow or widower, age 50 through 59 71 percent;</li>\n<li>Widow or widower, any age, caring for a child under age 16 75 percent;</li>\n<li>A child under age 18 (19 if still in elementary or secondary school) or disabled 75 percent; and</li>\n<li>Your dependent parent(s), age 62 or older:\n <ul class=\"browser-default\">\n<li>One surviving parent 82 percent.</li>\n<li>Two surviving parents 75 percent to each parent.</li>\n</ul>\n</li>\n</ul>\n<p>Percentages for a surviving divorced spouse would be the same as above.</p>\n<p>There may also be a <a>special lump-sum death payment</a>.</p>\n<h3>Maximum Family Amount</h3>\n<p>There's a limit to the amount that family members can receive each month. <a>The limit varies</a>, but it is generally equal to between 150 and 180 percent of the basic benefit rate.</p>\n<p>If the sum of the benefits payable to family members is greater than this limit, the benefits will be reduced proportionately. (Any benefits paid to a surviving divorced spouse based on disability or age won't count toward this maximum amount.)</p>\n<p>Get your <a></a> online or check our <a>Benefit Calculators</a> for an estimate of the benefits your family could receive if you died right now.</p>\n<h3>Other Things You Need To Know</h3>\n<p>There are <a>limits on how much survivors may earn</a> while they receive benefits.</p>\n<p>Benefits for a widow, widower, or surviving divorced spouse may be affected by several additional factors:</p>\n<div>\n<a>If they remarry</a>\n<section>\n<p>If your widow, widower, or surviving divorced spouse remarries before they reach age 60 (age 50 if disabled), they cannot receive benefits as a surviving spouse while they're married.</p>\n<p>If your widow, widower, or surviving divorced spouse remarries after they reach age 60 (age 50 if disabled), they will continue to qualify for benefits on your Social Security record.</p>\n<p>However, if their current spouse is a Social Security beneficiary, they may want to apply for spouse's benefits on their record. If that amount is more than the widow's or widower's benefit on your record, they will receive a combination of benefits that equals the higher amount.</p>\n</section>\n</div>\n<div>\n<a>If they're eligible for retirement benefits on their own record</a>\n<section>\n<p>If your widow, widower, or surviving divorced spouse receives benefits on your record, they can switch to their own retirement benefit as early as age 62. 
This assumes they're eligible for retirement benefits and their retirement rate is higher than their rate as a widow, widower, or surviving divorced spouse.</p>\n<p>In many cases, a widow or widower can begin receiving one benefit at a reduced rate and then, at full retirement age, switch to the other benefit at an unreduced rate.</p>\n<div>\n<a>Full retirement age for retirement benefits</a> may not match full retirement age for survivors benefits.\n </div>\n</section>\n</div>\n<div>\n<a>If they will also receive a pension based on work not covered by Social Security</a>\n<section>\n<p>If your widow, widower, or surviving divorced spouse will also receive a pension based on work not covered by Social Security, such as government or foreign work, <a>their Social Security benefits as a survivor may be affected</a>.</p>\n</section>\n</div>\n</section>\n</article>\n</div>\n</section>\n</main>"
}
```
Sample data instance for `dialogue_domain` :
```
{
"dial_id": "8df07b7a98990db27c395cb1f68a962e",
"domain": "dmv",
"turns": [
{
"turn_id": 1,
"role": "user",
"da": "query_condition",
"references": [
{
"id_sp": "4",
"label": "precondition",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
}
],
"utterance": "Hello, I forgot o update my address, can you help me with that?"
},
{
"turn_id": 2,
"role": "agent",
"da": "respond_solution",
"references": [
{
"id_sp": "6",
"label": "solution",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
},
{
"id_sp": "7",
"label": "solution",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
}
],
"utterance": "hi, you have to report any change of address to DMV within 10 days after moving. You should do this both for the address associated with your license and all the addresses associated with all your vehicles."
},
{
"turn_id": 3,
"role": "user",
"da": "query_solution",
"references": [
{
"id_sp": "56",
"label": "solution",
"doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0"
}
],
"utterance": "Can I do my DMV transactions online?"
}
]
}
```
### Data Fields
- `document_domain` contains the documents that are indexed by key `domain` and `doc_id` . Each document instance includes the following,
- `domain`: the domain of the document;
- `doc_id`: the ID of a document;
- `title`: the title of the document;
- `doc_text`: the text content of the document (without HTML markups);
- `spans`: key-value pairs of all spans in the document, with `id_sp` as key. Each span includes the following,
- `id_sp`: the id of a span as noted by `text_id` in `doc_html_ts`;
- `start_sp`/ `end_sp`: the start/end position of the text span in `doc_text`;
- `text_sp`: the text content of the span.
- `id_sec`: the id of the (sub)section (e.g. `<p>`) or title (`<h2>`) that contains the span.
- `start_sec` / `end_sec`: the start/end position of the (sub)section in `doc_text`.
- `text_sec`: the text of the (sub)section.
- `title`: the title of the (sub)section.
- `parent_titles`: the parent titles of the `title`.
- `doc_html_ts`: the document content with HTML markups and the annotated spans that are indicated by `text_id` attribute, which corresponds to `id_sp`.
- `doc_html_raw`: the document content with HTML markups and without span annotations.
- `dialogue_domain`
Each dialogue instance includes the following,
- `dial_id`: the ID of a dialogue;
- `domain`: the domain of the document;
- `turns`: a list of dialogue turns. Each turn includes,
- `turn_id`: the time order of the turn;
- `role`: either "agent" or "user";
- `da`: dialogue act;
- `references`: a list of spans with `id_sp` , `label` and `doc_id`. `references` is empty if a turn is for indicating previous user query not answerable or irrelevant to the document. **Note** that labels "*precondition*"/"*solution*" are fuzzy annotations that indicate whether a span is for describing a conditional context or a solution.
- `utterance`: the human-generated utterance based on the dialogue scene.
- `multidoc2dial`
Each dialogue instance includes the following,
- `id`: the ID of a QA instance
- `title`: the title of the relevant document;
- `context`: the text content of the relevant document (without HTML markups).
- `question`: user query;
- `da`: dialogue act;
- `answers`: the answers that are grounded in the associated document;
- `text`: the text content of the grounding span;
- `answer_start`: the start position of the grounding span in the associated document (context);
- `utterance`: the human-generated utterance based on the dialogue scene.
- `domain`: domain of the relevant document;
### Data Splits
- Training, dev and test splits for the default configuration `multidoc2dial`, with 21451, 4201 and 4094 examples respectively;
- Training and dev splits for the `dialogue_domain` configuration, with 3474 and 661 examples;
- Training split only for the `document_domain` configuration, with 488 examples.
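A minimal sketch of loading the default configuration with the Hugging Face `datasets` library and checking that each grounding span can be recovered from its `answer_start` offset; this assumes the dataset is loaded by its Hub name `multidoc2dial` (recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Load the SQuAD-style `multidoc2dial` configuration described above.
data = load_dataset("multidoc2dial", "multidoc2dial", split="validation")

example = data[0]
answers = example["answers"]
for text, start in zip(answers["text"], answers["answer_start"]):
    # Each grounding span should be recoverable from `context` via its offset.
    assert example["context"][start:start + len(text)] == text
```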
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Song Feng, Siva Sankalp Patel, Hui Wan, Sachindra Joshi
### Licensing Information
Creative Commons Attribution 3.0 Unported
### Citation Information
```bibtex
@inproceedings{feng2021multidoc2dial,
title={MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents},
author={Feng, Song and Patel, Siva Sankalp and Wan, Hui and Joshi, Sachindra},
booktitle={EMNLP},
year={2021}
}
```
### Contributions
Thanks to [@songfeng](https://github.com/songfeng) and [@sivasankalpp](https://github.com/sivasankalpp) for adding this dataset. |
BeIR/webis-touche2020 | 2022-10-23T06:03:23.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 302 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
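As a minimal sketch, the corpus, queries and relevance judgments can be loaded from the Hugging Face Hub roughly as follows. This assumes the usual BeIR layout (`corpus` and `queries` configurations here, plus a companion `BeIR/webis-touche2020-qrels` repository), so the exact config and split names should be checked against the repository.
```python
# Minimal sketch, assuming the usual BeIR layout on the Hub
# ("corpus"/"queries" configs here, qrels in BeIR/webis-touche2020-qrels).
from datasets import load_dataset

corpus = load_dataset("BeIR/webis-touche2020", "corpus", split="corpus")
queries = load_dataset("BeIR/webis-touche2020", "queries", split="queries")
qrels = load_dataset("BeIR/webis-touche2020-qrels", split="test")

print(corpus[0])   # e.g. {"_id": ..., "title": ..., "text": ...}
print(queries[0])  # e.g. {"_id": ..., "text": ...}
print(qrels[0])    # e.g. {"query-id": ..., "corpus-id": ..., "score": ...}
```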
### Supported Tasks and Leaderboards
The dataset supports zero-shot evaluation of retrieval models, which are typically compared with ranking metrics such as nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id.
  - `title`: a `string` feature, denoting the title of the document.
  - `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id.
  - `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id.
  - `corpus-id`: a `string` feature, denoting the document id.
  - `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
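The download links in the table above can also be used with the `beir` Python package. The following is a minimal sketch assuming `beir` is installed (`pip install beir`) and follows its standard data-loading API:
```python
# Minimal sketch using the `beir` package; the zip URL is taken from the table above.
from beir import util
from beir.datasets.data_loader import GenericDataLoader

url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip"
data_path = util.download_and_unzip(url, "datasets")  # downloads and extracts into ./datasets

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: text},
# qrels: {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```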
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
psyche/common_crawl | 2023-09-15T00:50:38.000Z | [
"license:apache-2.0",
"region:us"
] | psyche | null | null | null | 2 | 302 | ---
license:
- apache-2.0
---
This dataset is designed to be used in streaming mode.
Link to the original data: https://data.commoncrawl.org/crawl-data/CC-MAIN-2022-27/warc.paths.gz
_Requirements: selectolax, warcio_
```python
from datasets import load_dataset

# The config name is a number given as a string, e.g. "1", "2", ...
# (the available config names depend on the dataset).
dataset = load_dataset("psyche/common_crawl", "1", streaming=True)
```
|
allenai/soda | 2023-01-04T09:24:32.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|Atomic10x",
"language:en",
"license:cc-by-4.0",
"dialogue",
"narrative",
"commonsense",
"arxiv:2212.10465",
"region:us"
] | allenai | null | null | null | 97 | 302 | ---
language:
- en
language_creators:
- machine-generated
annotations_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: SODA
size_categories:
- 1M<n<10M
splits:
- name: train
num_examples: 1191582
- name: valid
num_examples: 146346
- name: test
num_examples: 148968
dataset_size: 1486896
source_datasets:
- original
- extended|Atomic10x
tags:
- dialogue
- narrative
- commonsense
task_categories:
- conversational
task_ids:
- dialogue-generation
---
# Dataset Card for 🥤SODA
## Dataset Description
- **Repository:** [Code](https://github.com/skywalker023/sodaverse)
- **Paper:** [SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization](https://arxiv.org/abs/2212.10465)
- **Point of Contact:** [Hyunwoo Kim](mailto:hyunwook@allenai.org)
## Dataset Summary
🥤SODA is the first publicly available, million-scale, high-quality dialogue dataset covering a wide range of social interactions. Dialogues are distilled from a PLM (InstructGPT; Ouyang et al., 2022) by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets – e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). Also, since social commonsense knowledge encompasses emotional reactions (i.e., the xReact `relation`), SODA includes 385K conversations labeled with 1.7K unique emotions along with information about the experiencer and the cause – i.e., `PersonX` and the `head` event in the symbolic commonsense knowledge triple.
## Languages
English
## Dataset Structure
field | type | description
--- | --- | ---
`head` | str | the head event in the symbolic commonsense knowledge triple
`relation` | str | the relationship between `head` and `tail` events
`tail` | str | the tail event in the symbolic commonsense knowledge triple
`literal` | str | the symbolic commonsense knowledge in sentence-form
`narrative` | str | narrative based on the `literal`
`dialogue` | list of str | dialogue grounded in the `narrative`
`speakers` | list of str | the speakers for each turn in the `dialogue`
`PersonX` | str | the assigned name for PersonX in the commonsense knowledge triple
`PersonY` | str\|null | the assigned name for PersonY in the commonsense knowledge triple
`PersonZ` | str\|null | the assigned name for PersonZ in the commonsense knowledge triple
`original_index` | int | the original index from Atomic10x
`split` | str | the split information: {train, valid, test}
`head_answer` | str | the answer for whether the `head` is included in the `narrative`: {Yes, Unknown}
`pmi_head_answer` | str | the answer for whether the `head` is included in the `narrative` with point-wise mutual information applied: {Yes, No, Unknown}
`relation_tail_answer` | str | the answer for whether the `relation`-`tail` is included in the `dialogue`: {Yes, No, Unknown}
`pmi_relation_tail_answer` | str | the answer for whether the `relation`-`tail` is included in the `dialogue` with point-wise mutual information applied: {Yes, No, Unknown}
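As a minimal sketch, the dataset can be loaded with the `datasets` library as follows (split and field names as listed above):
```python
# Minimal sketch of loading SODA and printing one dialogue.
from datasets import load_dataset

soda = load_dataset("allenai/soda")  # splits: train / valid / test
example = soda["train"][0]

print(example["narrative"])
for speaker, turn in zip(example["speakers"], example["dialogue"]):
    print(f"{speaker}: {turn}")
```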
## Dataset Creation
To create 🥤SODA, we distill dialogues from InstructGPT by contextualizing social commonsense knowledge – i.e., adding context information in multiple steps: (1) Retrieve social commonsense from the symbolic commonsense knowledge graph, (2) convert it into sentence form, (3) generate a narrative from the sentence, (4) infer the speakers from the narrative, and finally (5) derive contentful conversation grounded in the narrative and speakers. Anchoring the PLM in commonsense knowledge for deriving conversations offers two key advantages: (1) minimizing nonsensical conversations and (2) maximizing diversity. For more details, please refer to our [paper](https://arxiv.org/abs/2212.10465).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2212.10465).
## Trained Model
Using 🥤SODA, we train 🧑🏻🚀COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. COSMO-3B is available [here](https://huggingface.co/allenai/cosmo-xl)!
## Additional Information
For a brief summary of our paper, please see this [tweet](https://twitter.com/hyunw__kim/status/1605400305126248448).
### Citation
Please cite our work if you find the resources in this repository useful:
```
@article{kim2022soda,
title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
journal={ArXiv},
year={2022},
volume={abs/2212.10465}
}
``` |
Muennighoff/xP3x | 2023-09-22T06:27:32.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ch",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:ku",
"language:kw",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nb",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:pl",
"language:pt",
"language:qu",
"language:rn",
"language:ro",
"language:ru",
"language:sh",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vo",
"language:yi",
"language:zh",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:bem",
"language:bho",
"language:bjn",
"language:bo",
"language:bug",
"language:ceb",
"language:cjk",
"language:ckb",
"language:crh",
"language:dik",
"language:dyu",
"language:dz",
"language:ee",
"language:fj",
"language:fon",
"language:fur",
"language:fuv",
"language:gaz",
"language:gu",
"language:ht",
"language:ha",
"language:hne",
"language:ig",
"language:ilo",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:kbp",
"language:kea",
"language:khk",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lmo",
"language:ltg",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:mar",
"language:min",
"language:mni",
"language:mos",
"language:npi",
"language:nso",
"language:nus",
"language:ny",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:prs",
"language:quy",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:sc",
"language:ss",
"language:su",
"language:swh",
"language:szl",
"language:taq",
"language:tg",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:tzm",
"language:umb",
"language:uzn",
"language:vec",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zsm",
"language:zu",
"license:apache-2.0",
"arxiv:2211.01786",
"region:us"
] | Muennighoff | xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. | @article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
} | null | 6 | 302 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- af
- ar
- az
- be
- bg
- bn
- br
- bs
- ca
- ch
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fy
- ga
- gd
- gl
- gn
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jv
- ka
- kk
- km
- ko
- ku
- kw
- la
- lb
- lt
- lv
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nb
- nl
- nn
- 'no'
- oc
- pl
- pt
- qu
- rn
- ro
- ru
- sh
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vo
- yi
- zh
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
programming_language:
- Java
- Python
- Jupyter-Notebook
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3x
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3x
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com)
### Dataset Summary
> xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more! It is used for training future contenders of mT0 & BLOOMZ at project Aya @[C4AI](https://cohere.for.ai/) 🧡
>
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3) together with the file in this repository named `xp3x_create.py`. We provide this version to save processing time.
- **Languages:** 277
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
  "inputs": "11月、遂にクロームはファイヤーフォックスを引き離し始めた。_はインターネットユーザーの評価が高まったのだ。\nReplace the _ in the above sentence with the correct option: \n- ファイヤーフォックス\n- クローム",
  "targets": "クローム",
  "language": "jpn_Jpan",
  "split": "test",
  "template": "Replace",
  "dataset": "Muennighoff/xwinograd",
  "config": "jp"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
- `language`: The language code. The codes are an extension of the FLORES-200 codes, where the first part is the language code and the second part the script code.
- `template`: The name of the prompt used.
- `dataset`: The Hugging Face dataset identifier of where the data stems from.
- `config`: The config of the Hugging Face dataset.
### Usage
The dataset is about 680 gigabytes in size and contains roughly 530 million samples. You may want to filter it and then deduplicate depending on your needs.
Loading by language:
```python
# pip install -q datasets
from datasets import load_dataset
ds = load_dataset("Muennighoff/xP3x", "zho_Hans", streaming=True) # Use streaming to not download all at once
for x in ds["train"]:
print(x)
break
```
You can then filter down by the data fields to e.g. only get certain configs or datasets.
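For example, reusing the streaming dataset `ds` from the snippet above, one could keep only samples from a single source dataset (a sketch; the field names are the ones described above):
```python
# Keep only xwinograd-derived samples from the streamed split (illustrative filter).
xwinograd_only = ds["train"].filter(lambda x: x["dataset"] == "Muennighoff/xwinograd")
```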
As every dataset-config-template is its own jsonl file, you can also decide on the datasets, configs and templates you want and only download them.
For example, to download all Japanese xwinograd samples, you could do:
```python
# pip install -q datasets
from datasets import load_dataset
import multiprocessing
# pip install --upgrade huggingface-hub
from huggingface_hub import HfFileSystem, hf_hub_url
fs = HfFileSystem()
fps = fs.glob(f"datasets/Muennighoff/xP3x/data/jpn_Jpan/*xwinograd*")
resolved_paths = [fs.resolve_path(file) for file in fps]
data_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]
ds = load_dataset("json", data_files=data_files, num_proc=8)["train"]
```
Sometimes it may be faster to clone the entire repo. To download all English files, you could do e.g.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/Muennighoff/xP3x
cd xP3x
git lfs pull --include="xP3x/eng_Latn/*"
```
### Data Splits
|Language|Code|Kilobytes|%|Samples|%|
|--------|------:|------:|-:|---:|-:|
|Emilian|egl_Latn|104|0.0|402|0.0|
|Swiss German|gsw_Latn|104|0.0|408|0.0|
|Novial|nov_Latn|116|0.0|432|0.0|
|Ainu (Latin script)|ain_Latn|120|0.0|410|0.0|
|Chamorro|cha_Latn|120|0.0|452|0.0|
|Gothic|got_Goth|120|0.0|402|0.0|
|Prussian|prg_Latn|120|0.0|424|0.0|
|Picard|pcd_Latn|140|0.0|530|0.0|
|Northern Frisian|frr_Latn|156|0.0|554|0.0|
|Uzbek (Latin script)|uzb_Latn|156|0.0|600|0.0|
|Ottoman Turkish (Latin script)|ota_Latn|188|0.0|632|0.0|
|Swahili (macrolanguage)|swa_Latn|212|0.0|772|0.0|
|Talossan|tzl_Latn|220|0.0|836|0.0|
|Kven Finnish|fkv_Latn|260|0.0|910|0.0|
|Zaza|zza_Latn|260|0.0|1,056|0.0|
|Frisian|fry_Latn|268|0.0|956|0.0|
|Piemontese|pms_Latn|276|0.0|998|0.0|
|Kalmyk|xal_Cyrl|288|0.0|976|0.0|
|Hunsrik|hrx_Latn|352|0.0|1,380|0.0|
|Romany|rom_Latn|364|0.0|1,410|0.0|
|Ancient Greek (to 1453)|grc_Grek|392|0.0|1,226|0.0|
|Tase Naga|nst_Latn|424|0.0|1,608|0.0|
|Albanian|sqi_Latn|596|0.0|2,216|0.0|
|Guadeloupean Creole French|gcf_Latn|608|0.0|2,326|0.0|
|Yakut|sah_Cyrl|608|0.0|1,986|0.0|
|Ho (Latin script)|hoc_Latn|632|0.0|2,634|0.0|
|Khasi|kha_Latn|676|0.0|2,664|0.0|
|Algerian Arabic|arq_Arab|688|0.0|2,278|0.0|
|Lower Sorbian|dsb_Latn|692|0.0|2,596|0.0|
|Chuvash|chv_Cyrl|716|0.0|2,446|0.0|
|Old Russian|orv_Cyrl|752|0.0|2,586|0.0|
|Pampanga|pam_Latn|784|0.0|2,984|0.0|
|Kurdish (Latin script)|kur_Latn|796|0.0|3,050|0.0|
|Ottoman Turkish|ota_Arab|832|0.0|2,772|0.0|
|Kotava|avk_Latn|864|0.0|3,118|0.0|
|Upper Sorbian|hsb_Latn|900|0.0|3,474|0.0|
|Buryat|bua_Cyrl|924|0.0|3,218|0.0|
|Swabian|swg_Latn|996|0.0|3,366|0.0|
|Coastal Kadazan|kzj_Latn|1,136|0.0|3,766|0.0|
|Chavacano|cbk_Latn|1,352|0.0|4,994|0.0|
|Quechua|que_Latn|1,704|0.0|5,312|0.0|
|Lingua Franca Nova (Cyrillic script)|lfn_Cyrl|1,740|0.0|5,458|0.0|
|Gronings|gos_Latn|1,864|0.0|7,462|0.0|
|Volapük|vol_Latn|1,948|0.0|7,712|0.0|
|Yue Chinese (Simplified)|yue_Hans|2,300|0.0|7,872|0.0|
|Mari (Russia)|chm_Cyrl|2,540|0.0|7,496|0.0|
|Kadazan Dusun|dtp_Latn|2,548|0.0|8,892|0.0|
|Breton|bre_Latn|3,048|0.0|11,868|0.0|
|Ladino|lad_Latn|3,224|0.0|11,916|0.0|
|Cornish|cor_Latn|3,492|0.0|13,880|0.0|
|Interlingue|ile_Latn|3,700|0.0|14,468|0.0|
|Wu Chinese|wuu_Hans|3,784|0.0|13,062|0.0|
|Japanese (Katakana)|jpn_Kana|4,208|0.0|13,942|0.0|
|Ido|ido_Latn|6,180|0.0|23,742|0.0|
|Yiddish|yid_Hebr|9,896|0.0|34,412|0.01|
|Klingon|tlh_Latn|11,716|0.0|46,010|0.01|
|Lingua Franca Nova|lfn_Latn|13,328|0.0|46,826|0.01|
|Lojban|jbo_Latn|17,468|0.0|66,694|0.01|
|Low German|nds_Latn|18,364|0.0|68,098|0.01|
|Interlingua (International Auxiliary Language Association)|ina_Latn|25,700|0.0|76,584|0.01|
|Java|java|25,904|0.0|13,551|0.0|
|Japanese (Kanji)|jpn_Hani|26,292|0.0|89,978|0.02|
|Norwegian|nor_Latn|26,724|0.0|93,116|0.02|
|Toki Pona|toki_Latn|26,808|0.0|97,170|0.02|
|Latin|lat_Latn|28,900|0.0|101,390|0.02|
|Serbo-Croatian|hbs_Latn|29,452|0.0|105,748|0.02|
|Nigerian Pidgin|pcm_Latn|145,872|0.02|88,992|0.02|
|Azerbaijani (South or North; Latin script)|aze_Latn|147,564|0.02|77,875|0.01|
|Serbian (Latin script)|srp_Latn|179,072|0.03|131,101|0.02|
|Japanese (Hiragana)|jpn_Hira|188,944|0.03|628,758|0.12|
|Berber (Latin script)|ber_Latn|201,464|0.03|693,602|0.13|
|Jupyter Notebook|jupyter_notebook|416,056|0.06|400,000|0.08|
|Yue Chinese|yue_Hant|613,352|0.09|1,227,429|0.23|
|Haitian Creole|hat_Latn|629,420|0.09|1,228,281|0.23|
|Mossi|mos_Latn|630,416|0.09|1,223,481|0.23|
|Pangasinan|pag_Latn|630,684|0.09|1,223,481|0.23|
|Twi|twi_Latn|631,172|0.09|1,223,481|0.23|
|Bosnian|bos_Latn|633,016|0.09|1,224,479|0.23|
|Ewe|ewe_Latn|633,292|0.09|1,223,481|0.23|
|Bambara|bam_Latn|634,520|0.09|1,223,481|0.23|
|Javanese|jav_Latn|635,248|0.09|1,224,003|0.23|
|Southwestern Dinka|dik_Latn|635,416|0.09|1,223,481|0.23|
|Kabuverdianu|kea_Latn|636,144|0.09|1,223,481|0.23|
|Dyula|dyu_Latn|636,464|0.09|1,223,481|0.23|
|Venetian|vec_Latn|637,412|0.09|1,223,481|0.23|
|Chokwe|cjk_Latn|637,532|0.09|1,223,481|0.23|
|Latgalian|ltg_Latn|637,612|0.09|1,223,481|0.23|
|Sundanese|sun_Latn|638,120|0.09|1,223,481|0.23|
|Asturian|ast_Latn|638,708|0.09|1,223,481|0.23|
|Akan|aka_Latn|639,648|0.09|1,223,481|0.23|
|Mizo|lus_Latn|639,680|0.09|1,223,481|0.23|
|Guarani|grn_Latn|641,540|0.09|1,225,647|0.23|
|Limburgish|lim_Latn|642,368|0.09|1,223,481|0.23|
|Faroese|fao_Latn|642,432|0.09|1,224,067|0.23|
|Buginese|bug_Latn|643,472|0.09|1,223,481|0.23|
|Sango|sag_Latn|643,596|0.09|1,223,481|0.23|
|Luba-Kasai|lua_Latn|643,640|0.09|1,223,481|0.23|
|Papiamento|pap_Latn|643,648|0.09|1,223,481|0.23|
|Silesian|szl_Latn|644,608|0.09|1,223,481|0.23|
|Sicilian|scn_Latn|645,636|0.1|1,223,481|0.23|
|Kimbundu|kmb_Latn|645,964|0.1|1,223,481|0.23|
|Basque|eus_Latn|646,084|0.1|1,246,877|0.23|
|Balinese|ban_Latn|646,408|0.1|1,223,481|0.23|
|Norwegian Nynorsk|nno_Latn|646,996|0.1|1,229,699|0.23|
|Central Aymara|ayr_Latn|647,236|0.1|1,223,481|0.23|
|Tamasheq (Latin script)|taq_Latn|648,656|0.1|1,223,481|0.23|
|Kikongo|kon_Latn|648,992|0.1|1,223,481|0.23|
|Friulian|fur_Latn|649,272|0.1|1,223,481|0.23|
|Ayacucho Quechua|quy_Latn|649,992|0.1|1,223,481|0.23|
|Maori|mri_Latn|650,336|0.1|1,224,211|0.23|
|Icelandic|isl_Latn|650,372|0.1|1,246,623|0.23|
|Galician|glg_Latn|652,088|0.1|1,233,291|0.23|
|Catalan|cat_Latn|652,116|0.1|1,241,381|0.23|
|Lombard|lmo_Latn|652,120|0.1|1,223,481|0.23|
|Banjar (Latin script)|bjn_Latn|652,372|0.1|1,223,481|0.23|
|Fijian|fij_Latn|652,796|0.1|1,223,481|0.23|
|Crimean Tatar|crh_Latn|653,920|0.1|1,223,895|0.23|
|Northern Kurdish|kmr_Latn|654,108|0.1|1,223,481|0.23|
|Ligurian|lij_Latn|654,432|0.1|1,223,481|0.23|
|Occitan|oci_Latn|655,676|0.1|1,227,945|0.23|
|Turkmen|tuk_Latn|658,672|0.1|1,241,205|0.23|
|Luxembourgish|ltz_Latn|658,768|0.1|1,225,339|0.23|
|Cebuano|ceb_Latn|659,124|0.1|1,226,039|0.23|
|Samoan|smo_Latn|659,704|0.1|1,223,481|0.23|
|Sardinian|srd_Latn|660,000|0.1|1,223,481|0.23|
|Bemba|bem_Latn|660,504|0.1|1,223,481|0.23|
|Minangkabau (Latin script)|min_Latn|660,672|0.1|1,223,481|0.23|
|Acehnese (Latin script)|ace_Latn|661,084|0.1|1,223,481|0.23|
|Ilocano|ilo_Latn|661,184|0.1|1,227,663|0.23|
|Irish|gle_Latn|661,660|0.1|1,227,357|0.23|
|Fon|fon_Latn|663,124|0.1|1,223,481|0.23|
|Waray|war_Latn|664,120|0.1|1,226,503|0.23|
|Norwegian Bokmål|nob_Latn|666,240|0.1|1,300,607|0.24|
|Tosk Albanian|als_Latn|666,692|0.1|1,223,481|0.23|
|Standard Malay|zsm_Latn|667,088|0.1|1,270,715|0.24|
|Southern Sotho|sot_Latn|667,728|0.1|1,223,481|0.23|
|Kabyle|kab_Latn|668,128|0.1|1,346,605|0.25|
|Jingpho|kac_Latn|669,464|0.1|1,223,481|0.23|
|Lingala|lin_Latn|670,428|0.1|1,323,481|0.25|
|Wolof|wol_Latn|670,568|0.1|1,373,481|0.26|
|Central Kanuri (Latin script)|knc_Latn|670,800|0.1|1,223,481|0.23|
|Kikuyu|kik_Latn|672,096|0.1|1,223,481|0.23|
|Tok Pisin|tpi_Latn|672,916|0.1|1,223,481|0.23|
|Nuer|nus_Latn|673,632|0.1|1,223,481|0.23|
|Tagalog|tgl_Latn|673,684|0.1|1,247,417|0.23|
|Tumbuka|tum_Latn|676,948|0.1|1,223,481|0.23|
|Plateau Malagasy|plt_Latn|677,852|0.1|1,223,481|0.23|
|Afrikaans|afr_Latn|679,164|0.1|1,337,091|0.25|
|North Azerbaijani|azj_Latn|679,820|0.1|1,223,481|0.23|
|Kabiyè|kbp_Latn|684,880|0.1|1,223,481|0.23|
|Modern Standard Arabic (Romanized)|arb_Latn|685,408|0.1|1,223,481|0.23|
|Scottish Gaelic|gla_Latn|708,620|0.1|1,243,627|0.23|
|Sindhi|snd_Arab|718,680|0.11|1,223,481|0.23|
|North Levantine Arabic|apc_Arab|720,048|0.11|1,223,481|0.23|
|Tunisian Arabic|aeb_Arab|720,360|0.11|1,223,481|0.23|
|South Levantine Arabic|ajp_Arab|720,488|0.11|1,223,481|0.23|
|Dari|prs_Arab|720,500|0.11|1,223,481|0.23|
|Moroccan Arabic|ary_Arab|722,904|0.11|1,223,481|0.23|
|Egyptian Arabic|arz_Arab|723,356|0.11|1,223,481|0.23|
|Najdi Arabic|ars_Arab|725,784|0.11|1,223,481|0.23|
|Acehnese (Arabic script)|ace_Arab|726,272|0.11|1,223,481|0.23|
|Mesopotamian Arabic|acm_Arab|728,472|0.11|1,223,481|0.23|
|Ta’izzi-Adeni Arabic|acq_Arab|734,780|0.11|1,223,481|0.23|
|South Azerbaijani|azb_Arab|735,728|0.11|1,223,481|0.23|
|Central Kanuri (Arabic script)|knc_Arab|746,936|0.11|1,223,481|0.23|
|Rundi|run_Latn|749,792|0.11|1,296,111|0.24|
|Banjar (Arabic script)|bjn_Arab|751,112|0.11|1,223,481|0.23|
|Central Kurdish|ckb_Arab|756,804|0.11|1,223,481|0.23|
|Bashkir|bak_Cyrl|758,816|0.11|1,223,481|0.23|
|Kashmiri (Arabic script)|kas_Arab|759,140|0.11|1,223,481|0.23|
|Tatar|tat_Cyrl|764,212|0.11|1,247,685|0.23|
|Minangkabau (Arabic script)|min_Arab|765,384|0.11|1,223,481|0.23|
|Kazakh|kaz_Cyrl|766,176|0.11|1,232,697|0.23|
|Halh Mongolian|khk_Cyrl|776,384|0.11|1,224,353|0.23|
|Tajik|tgk_Cyrl|780,452|0.11|1,223,481|0.23|
|Eastern Yiddish|ydd_Hebr|781,452|0.12|1,223,481|0.23|
|Uyghur|uig_Arab|785,444|0.12|1,256,999|0.24|
|Armenian|hye_Armn|789,952|0.12|1,228,171|0.23|
|Hebrew|heb_Hebr|793,144|0.12|1,604,365|0.3|
|Belarusian|bel_Cyrl|806,588|0.12|1,261,197|0.24|
|Macedonian|mkd_Cyrl|813,436|0.12|1,384,567|0.26|
|Welsh|cym_Latn|821,036|0.12|1,321,455|0.25|
|Northern Uzbek|uzn_Latn|835,560|0.12|1,273,404|0.24|
|Central Atlas Tamazight|tzm_Tfng|843,508|0.12|1,223,481|0.23|
|Tamasheq (Tifinagh script)|taq_Tfng|848,104|0.12|1,223,481|0.23|
|Magahi|mag_Deva|851,360|0.13|1,223,481|0.23|
|Bhojpuri|bho_Deva|854,848|0.13|1,223,481|0.23|
|Awadhi|awa_Deva|857,096|0.13|1,224,037|0.23|
|Chhattisgarhi|hne_Deva|859,332|0.13|1,223,481|0.23|
|Kyrgyz|kir_Cyrl|860,700|0.13|1,250,163|0.23|
|Maithili|mai_Deva|863,476|0.13|1,223,481|0.23|
|Assamese|asm_Beng|865,904|0.13|1,223,481|0.23|
|Kashmiri (Devanagari script)|kas_Deva|867,232|0.13|1,223,481|0.23|
|Sanskrit|san_Deva|879,236|0.13|1,223,481|0.23|
|Lao|lao_Laoo|888,240|0.13|1,223,481|0.23|
|Odia|ory_Orya|890,508|0.13|1,223,481|0.23|
|Santali|sat_Olck|902,300|0.13|1,223,481|0.23|
|Kannada|kan_Knda|909,260|0.13|1,223,481|0.23|
|Meitei (Bengali script)|mni_Beng|917,984|0.14|1,223,481|0.23|
|Georgian|kat_Geor|928,712|0.14|1,226,729|0.23|
|Kamba|kam_Latn|936,468|0.14|2,136,615|0.4|
|Tigrinya|tir_Ethi|949,608|0.14|1,276,536|0.24|
|Swati|ssw_Latn|950,564|0.14|2,195,002|0.41|
|Malayalam|mal_Mlym|953,984|0.14|1,225,083|0.23|
|Nigerian Fulfulde|fuv_Latn|956,328|0.14|2,126,652|0.4|
|Umbundu|umb_Latn|974,104|0.14|2,264,553|0.43|
|Ganda|lug_Latn|975,780|0.14|2,273,481|0.43|
|Northern Sotho|nso_Latn|978,484|0.14|2,250,971|0.42|
|Khmer|khm_Khmr|984,756|0.14|1,227,825|0.23|
|Luo|luo_Latn|993,068|0.15|2,249,242|0.42|
|Standard Tibetan|bod_Tibt|993,732|0.15|1,223,481|0.23|
|Tswana|tsn_Latn|1,009,328|0.15|2,323,481|0.44|
|Kinyarwanda|kin_Latn|1,010,752|0.15|2,273,481|0.43|
|Sinhala|sin_Sinh|1,012,012|0.15|1,256,582|0.24|
|Xhosa|xho_Latn|1,019,804|0.15|2,323,481|0.44|
|Shona|sna_Latn|1,026,320|0.15|2,273,481|0.43|
|Esperanto|epo_Latn|1,029,444|0.15|2,612,083|0.49|
|Tsonga|tso_Latn|1,031,856|0.15|2,323,481|0.44|
|Dzongkha|dzo_Tibt|1,033,552|0.15|1,223,481|0.23|
|Zulu|zul_Latn|1,039,296|0.15|2,323,481|0.44|
|Serbian|srp_Cyrl|1,040,024|0.15|1,362,598|0.26|
|Nyanja|nya_Latn|1,061,780|0.16|2,323,481|0.44|
|Shan|shn_Mymr|1,074,940|0.16|1,223,481|0.23|
|Igbo|ibo_Latn|1,095,300|0.16|2,282,301|0.43|
|Hausa|hau_Latn|1,112,272|0.16|2,335,738|0.44|
|West Central Oromo|gaz_Latn|1,115,600|0.16|2,343,260|0.44|
|Nepali|npi_Deva|1,144,676|0.17|1,281,430|0.24|
|Yoruba|yor_Latn|1,164,540|0.17|2,334,801|0.44|
|Southern Pashto|pbt_Arab|1,170,840|0.17|1,365,533|0.26|
|Somali|som_Latn|1,198,320|0.18|2,482,437|0.47|
|Burmese|mya_Mymr|1,228,196|0.18|1,279,882|0.24|
|Amharic|amh_Ethi|1,261,128|0.19|1,980,215|0.37|
|Eastern Panjabi|pan_Guru|1,305,636|0.19|1,307,897|0.25|
|Gujarati|guj_Gujr|1,331,780|0.2|1,317,314|0.25|
|Marathi|mar_Deva|1,494,024|0.22|1,443,950|0.27|
|Bengali|ben_Beng|1,650,272|0.24|1,411,514|0.27|
|Chinese (Traditional)|zho_Hant|1,778,736|0.26|1,956,189|0.37|
|Tamil|tam_Taml|1,833,328|0.27|1,394,473|0.26|
|Swahili|swh_Latn|1,970,784|0.29|4,185,608|0.79|
|Telugu|tel_Telu|2,224,480|0.33|1,573,325|0.3|
|Ukrainian|ukr_Cyrl|2,227,616|0.33|2,216,119|0.42|
|Western Persian|pes_Arab|2,389,340|0.35|1,811,121|0.34|
|Turkish|tur_Latn|3,106,600|0.46|4,146,153|0.78|
|Urdu|urd_Arab|3,553,960|0.52|3,513,218|0.66|
|Korean|kor_Hang|4,642,468|0.68|3,415,920|0.64|
|Python|python|4,728,504|0.7|3,142,962|0.59|
|Japanese|jpn_Jpan|5,079,788|0.75|4,193,570|0.79|
|Thai|tha_Thai|6,860,704|1.01|4,666,299|0.88|
|Chinese (Simplified)|zho_Hans|8,063,684|1.19|7,355,509|1.38|
|Vietnamese|vie_Latn|8,398,824|1.24|6,194,925|1.16|
|Indonesian|ind_Latn|9,380,144|1.38|5,301,812|1.0|
|Hindi|hin_Deva|9,914,328|1.46|5,612,176|1.05|
|Croatian|hrv_Latn|10,028,028|1.48|5,583,975|1.05|
|Modern Standard Arabic|arb_Arab|11,051,064|1.63|7,232,551|1.36|
|Romanian|ron_Latn|11,441,636|1.68|5,594,927|1.05|
|Maltese|mlt_Latn|11,614,488|1.71|5,513,885|1.04|
|Slovenian|slv_Latn|12,014,912|1.77|5,533,689|1.04|
|Estonian|est_Latn|12,126,212|1.79|5,584,057|1.05|
|Lithuanian|lit_Latn|12,253,976|1.8|5,603,047|1.05|
|Slovak|slk_Latn|12,286,300|1.81|5,513,481|1.04|
|Standard Latvian|lvs_Latn|12,298,584|1.81|5,517,287|1.04|
|Polish|pol_Latn|12,409,684|1.83|5,868,631|1.1|
|Hungarian|hun_Latn|12,607,420|1.86|6,086,621|1.14|
|Russian|rus_Cyrl|13,110,908|1.93|8,798,927|1.65|
|Czech|ces_Latn|14,316,052|2.11|6,418,462|1.21|
|Bulgarian|bul_Cyrl|14,615,468|2.15|7,265,885|1.37|
|Swedish|swe_Latn|14,646,656|2.16|5,634,363|1.06|
|Finnish|fin_Latn|15,011,464|2.21|6,077,501|1.14|
|Danish|dan_Latn|16,136,612|2.38|5,831,109|1.1|
|Dutch|nld_Latn|22,387,020|3.3|8,992,864|1.69|
|Greek|ell_Grek|23,144,296|3.41|7,224,001|1.36|
|Italian|ita_Latn|23,952,824|3.53|9,967,738|1.87|
|Portuguese|por_Latn|27,297,252|4.02|11,242,808|2.11|
|German|deu_Latn|27,909,808|4.11|15,806,969|2.97|
|French|fra_Latn|28,428,608|4.18|16,365,984|3.08|
|Spanish|spa_Latn|30,969,580|4.56|16,315,928|3.07|
|English|eng_Latn|69,530,384|10.24|53,015,690|9.96|
|Total|-|679,318,704|100|532,107,156|100|
#### Language specifics
- `Japanese`: Data in `jpn_Hira`, `jpn_Kana`, `jpn_Hani` is guaranteed to have Hiragana, Katakana or Kanji, respectively in each sample. However, they may still include other styles. So while all samples in `jpn_Kana` are guaranteed to have Katakana, there may still be Hiragana or Kanji.
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- [MultiEURLEX](https://huggingface.co/datasets/multi_eurlex)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Dataset specifics
- Flores-200: There are three prompts for Flores: `continuation`, `question`, `command`, which represent three commonly used prompting styles, i.e. making a prompt seem like a natural continuation, turning it into a question or commanding the model to do something.
- tatoeba_mt: Contains duplicates. For example, it has data that is both classified as `jpn_Kana` and `jpn_Jpan`, so you may want to deduplicate.
## Additional Information
### Licensing Information
The dataset collection is released under Apache 2.0. Note that individual datasets may have different licenses.
### Citation Information
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
Thanks to the Aya team @[C4AI](https://cohere.for.ai/) 🧡
|
vegaviazhang/Med_QQpairs | 2023-06-16T03:35:25.000Z | [
"license:cc0-1.0",
"region:us"
] | vegaviazhang | null | null | null | 3 | 302 | ---
license: cc0-1.0
---
|
Dodon/ChartQA_dataset | 2023-09-13T16:49:37.000Z | [
"task_categories:visual-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:gpl-3.0",
"region:us"
] | Dodon | ChartQA dataset
Chart images, tables, image annotations, questions, answers | @article{masry2022chartqa,
title={ChartQA: A benchmark for question answering about charts with visual and logical reasoning},
author={Masry, Ahmed and Long, Do Xuan and Tan, Jia Qing and Joty, Shafiq and Hoque, Enamul},
journal={arXiv preprint arXiv:2203.10244},
year={2022}
} | null | 3 | 302 | ---
license: gpl-3.0
task_categories:
- visual-question-answering
language:
- en
size_categories:
- 10K<n<100K
--- |
docred | 2023-06-14T14:07:55.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:1906.06127",
"region:us"
] | null | Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:
- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.
- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.
- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios. | @inproceedings{yao-etal-2019-docred,
title = "{D}oc{RED}: A Large-Scale Document-Level Relation Extraction Dataset",
author = "Yao, Yuan and
Ye, Deming and
Li, Peng and
Han, Xu and
Lin, Yankai and
Liu, Zhenghao and
Liu, Zhiyuan and
Huang, Lixin and
Zhou, Jie and
Sun, Maosong",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1074",
doi = "10.18653/v1/P19-1074",
pages = "764--777",
} | null | 7 | 301 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: docred
pretty_name: DocRED
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
dataset_info:
features:
- name: title
dtype: string
- name: sents
sequence:
sequence: string
- name: vertexSet
list:
list:
- name: name
dtype: string
- name: sent_id
dtype: int32
- name: pos
sequence: int32
- name: type
dtype: string
- name: labels
sequence:
- name: head
dtype: int32
- name: tail
dtype: int32
- name: relation_id
dtype: string
- name: relation_text
dtype: string
- name: evidence
sequence: int32
splits:
- name: validation
num_bytes: 3425030
num_examples: 998
- name: test
num_bytes: 2843877
num_examples: 1000
- name: train_annotated
num_bytes: 10413156
num_examples: 3053
- name: train_distant
num_bytes: 346001876
num_examples: 101873
download_size: 458040413
dataset_size: 362683939
---
# Dataset Card for DocRED
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/thunlp/DocRED](https://github.com/thunlp/DocRED)
- **Paper:** [DocRED: A Large-Scale Document-Level Relation Extraction Dataset](https://arxiv.org/abs/1906.06127)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 21.00 MB
- **Size of the generated dataset:** 20.12 MB
- **Total amount of disk used:** 41.14 MB
### Dataset Summary
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:
- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.
- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.
- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 21.00 MB
- **Size of the generated dataset:** 20.12 MB
- **Total amount of disk used:** 41.14 MB
An example of 'train_annotated' looks as follows.
```
{
"labels": {
"evidence": [[0]],
"head": [0],
"relation_id": ["P1"],
"relation_text": ["is_a"],
"tail": [0]
},
"sents": [["This", "is", "a", "sentence"], ["This", "is", "another", "sentence"]],
"title": "Title of the document",
"vertexSet": [[{
"name": "sentence",
"pos": [3],
"sent_id": 0,
"type": "NN"
}, {
"name": "sentence",
"pos": [3],
"sent_id": 1,
"type": "NN"
}], [{
"name": "This",
"pos": [0],
"sent_id": 0,
"type": "NN"
}]]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `title`: a `string` feature.
- `sents`: a `list` of sentences, each given as a `list` of `string` tokens.
- `vertexSet`: a `list` of entity mention clusters; each mention is a dictionary containing:
  - `name`: a `string` feature.
  - `sent_id`: an `int32` feature.
  - `pos`: a `list` of `int32` features.
  - `type`: a `string` feature.
- `labels`: a dictionary feature containing:
  - `head`: an `int32` feature.
  - `tail`: an `int32` feature.
  - `relation_id`: a `string` feature.
  - `relation_text`: a `string` feature.
  - `evidence`: a `list` of `int32` features.
### Data Splits
| name |train_annotated|train_distant|validation|test|
|-------|--------------:|------------:|---------:|---:|
|default| 3053| 101873| 998|1000|
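A minimal loading sketch (hedged: it assumes the canonical 🤗 Hub id `docred` and the split names from the table above; depending on your 🤗 Datasets version you may also need `trust_remote_code=True`):
```python
from datasets import load_dataset

# Load the human-annotated training split (split names as in the table above).
docred = load_dataset("docred", split="train_annotated")

example = docred[0]
print(example["title"])
# `labels` is a dict of parallel lists: `head`/`tail` index into `vertexSet`,
# and `relation_text` gives the human-readable relation name.
print(example["labels"]["relation_text"][:5])
```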
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{yao-etal-2019-docred,
title = "{D}oc{RED}: A Large-Scale Document-Level Relation Extraction Dataset",
author = "Yao, Yuan and
Ye, Deming and
Li, Peng and
Han, Xu and
Lin, Yankai and
Liu, Zhenghao and
Liu, Zhiyuan and
Huang, Lixin and
Zhou, Jie and
Sun, Maosong",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1074",
doi = "10.18653/v1/P19-1074",
pages = "764--777",
}
```
### Contributions
Thanks to [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
hf-internal-testing/fixtures_ocr | 2021-12-07T08:07:29.000Z | [
"region:us"
] | hf-internal-testing | \\n | \\n | null | 0 | 300 | This dataset includes 2 images: one from the [IAM Handwriting Database](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database) and one from the [SROIE](https://rrc.cvc.uab.es/?ch=13) dataset.
They are used for testing OCR models that are part of the HuggingFace Transformers library. See [here](https://github.com/huggingface/transformers/search?q=fixtures_ocr) for details.
More specifically, they are used inside `test_modeling_vision_encoder_decoder_model.py`, for testing the TrOCR models. |
neuclir/neuclir1 | 2023-01-12T18:43:52.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|c4",
"language:fa",
"language:ru",
"language:zh",
"license:odc-by",
"region:us"
] | neuclir | null | null | null | 1 | 300 | ---
annotations_creators:
- no-annotation
language:
- fa
- ru
- zh
language_creators:
- found
license:
- odc-by
multilinguality:
- multilingual
pretty_name: NeuCLIR1
size_categories:
- 1M<n<10M
source_datasets:
- extended|c4
tags: []
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for NeuCLIR1
## Dataset Description
- **Website:** https://neuclir.github.io/
- **Repository:** https://github.com/NeuCLIR/download-collection
### Dataset Summary
This is the dataset created for the TREC 2022 NeuCLIR Track. The collection is designed to be similar to HC4, and a large portion of the documents from HC4 are ported into this collection.
The documents are Web pages from Common Crawl in Chinese, Persian, and Russian.
### Languages
- Chinese
- Persian
- Russian
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 2.2M |
| `rus` (Russian) | 4.6M |
| `zho` (Chinese) | 3.2M |
### Data Fields
- `id`: unique identifier for this document
- `cc_file`: source file from Common Crawl
- `time`: extracted date/time from article
- `title`: title extracted from article
- `text`: extracted article body
- `url`: source URL
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/neuclir1')
dataset['fas'] # Persian documents
dataset['rus'] # Russian documents
dataset['zho'] # Chinese documents
```
|
Tevatron/msmarco-passage-corpus | 2022-03-16T15:27:25.000Z | [
"region:us"
] | Tevatron | null | @misc{bajaj2018ms,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu
and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song
and Alina Stoica and Saurabh Tiwary and Tong Wang},
year={2018},
eprint={1611.09268},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 299 | Entry not found |
mteb/stackexchange-clustering-p2p | 2022-09-27T19:14:52.000Z | [
"language:en",
"region:us"
] | mteb | null | null | null | 0 | 299 | ---
language:
- en
--- |
Feanix/sms_convos | 2023-09-12T18:03:26.000Z | [
"region:us"
] | Feanix | null | null | null | 0 | 299 | ---
configs:
- config_name: default
data_files:
- split: "2893149136"
path: "data/2893149136.parquet"
- split: "4162702577"
path: "data/4162702577.parquet"
- split: "4162774414"
path: "data/4162774414.parquet"
- split: "4164173989"
path: "data/4164173989.parquet"
- split: "4164736343"
path: "data/4164736343.parquet"
- split: "4165272818"
path: "data/4165272818.parquet"
- split: "4165284840"
path: "data/4165284840.parquet"
- split: "4165796634"
path: "data/4165796634.parquet"
- split: "4167054500"
path: "data/4167054500.parquet"
- split: "4168035459"
path: "data/4168035459.parquet"
- split: "4168207224"
path: "data/4168207224.parquet"
- split: "4168982667"
path: "data/4168982667.parquet"
- split: "4376844772"
path: "data/4376844772.parquet"
- split: "5148067528"
path: "data/5148067528.parquet"
- split: "5192007528"
path: "data/5192007528.parquet"
- split: "6473036801"
path: "data/6473036801.parquet"
- split: "6473852319"
path: "data/6473852319.parquet"
- split: "6474051995"
path: "data/6474051995.parquet"
- split: "6474084977"
path: "data/6474084977.parquet"
- split: "6474462582"
path: "data/6474462582.parquet"
- split: "6474827838"
path: "data/6474827838.parquet"
- split: "6475240601"
path: "data/6475240601.parquet"
- split: "6475299135"
path: "data/6475299135.parquet"
- split: "6475677019"
path: "data/6475677019.parquet"
- split: "6475692539"
path: "data/6475692539.parquet"
- split: "6476222943"
path: "data/6476222943.parquet"
- split: "6476946326"
path: "data/6476946326.parquet"
- split: "6477176145"
path: "data/6477176145.parquet"
- split: "6478245826"
path: "data/6478245826.parquet"
- split: "6478385496"
path: "data/6478385496.parquet"
- split: "6478614240"
path: "data/6478614240.parquet"
- split: "6478618498"
path: "data/6478618498.parquet"
- split: "6478845216"
path: "data/6478845216.parquet"
- split: "6479065591"
path: "data/6479065591.parquet"
- split: "6479168193"
path: "data/6479168193.parquet"
- split: "6479289430"
path: "data/6479289430.parquet"
- split: "6479690125"
path: "data/6479690125.parquet"
- split: "6479933258"
path: "data/6479933258.parquet"
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
stas/wmt14-en-de-pre-processed | 2021-02-16T04:41:04.000Z | [
"region:us"
] | stas | null | @InProceedings{huggingface:dataset,
title = {WMT14 English-German Translation Data with further preprocessing},
authors={},
year={2016}
} | null | 1 | 297 | # WMT14 English-German Translation Data w/ further preprocessing
The original pre-processing script is [here](https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-wmt14en2de.sh).
This pre-processed dataset was created by running:
```
git clone https://github.com/pytorch/fairseq
cd fairseq
cd examples/translation/
./prepare-wmt14en2de.sh
```
It was originally used by `transformers` [`finetune_trainer.py`](https://github.com/huggingface/transformers/blob/641f418e102218c4bf16fcd3124bfebed6217ef6/examples/seq2seq/finetune_trainer.py)
The data itself resides at https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz
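A hedged sketch of fetching and unpacking that archive with the Python standard library (the URL is the one above; the directory layout inside the archive is not documented here):
```python
import tarfile
import urllib.request

# Download the pre-processed archive referenced above.
url = "https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz"
urllib.request.urlretrieve(url, "wmt_en_de.tgz")

# Unpack it into a local directory for use with finetune_trainer.py-style scripts.
with tarfile.open("wmt_en_de.tgz", "r:gz") as archive:
    archive.extractall("wmt_en_de")
```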
|
ostapeno/flanv2_100k_2 | 2023-08-16T15:42:38.000Z | [
"license:apache-2.0",
"region:us"
] | ostapeno | null | null | null | 0 | 296 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: int64
- name: user
dtype: string
- name: assistant
dtype: string
splits:
- name: train
num_bytes: 143307369
num_examples: 100000
download_size: 85860910
dataset_size: 143307369
---
|
harouzie/vi_question_generation | 2023-09-04T05:02:36.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:vi",
"license:mit",
"region:us"
] | harouzie | null | null | null | 0 | 296 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 211814961.2307449
num_examples: 174499
- name: test
num_bytes: 26477628.80776531
num_examples: 21813
- name: valid
num_bytes: 26476414.961489797
num_examples: 21812
download_size: 142790671
dataset_size: 264769005
task_categories:
- question-answering
- text2text-generation
language:
- vi
pretty_name: Vietnamese Dataset for Extractive Question Answering and Question Generation
size_categories:
- 100K<n<1M
--- |
SetFit/qnli | 2022-02-28T13:29:16.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 295 | # Glue QNLI
This dataset is a port of the official [`qnli` dataset](https://huggingface.co/datasets/glue/viewer/qnli/train) on the Hub.
Note that the `question` and `sentence` columns have been renamed to `text1` and `text2` respectively.
Also, the test split is not labeled; the `label` column values are always -1.
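A minimal usage sketch (hedged), reflecting the renamed columns and the unlabeled test split described above:
```python
from datasets import load_dataset

qnli = load_dataset("SetFit/qnli")

# Columns follow the renaming noted above: `text1` (question) and `text2` (sentence).
print(qnli["train"][0]["text1"], "||", qnli["train"][0]["text2"])

# The test split carries no gold labels; every value is -1.
assert set(qnli["test"]["label"]) == {-1}
```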
|
HuggingFaceH4/databricks_dolly_15k | 2023-04-12T17:11:41.000Z | [
"license:cc-by-3.0",
"arxiv:2203.02155",
"region:us"
] | HuggingFaceH4 | null | null | null | 17 | 295 | ---
license: cc-by-3.0
dataset_info:
features:
- name: category
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 12326332
num_examples: 15015
download_size: 0
dataset_size: 12326332
---
# Dataset Card for Dolly_15K
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
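A hedged sketch of that recommended cleanup step; in this port the features listed above are `category`/`instruction`/`input`/`output`, so the reference text is assumed to live in `input` rather than `context`:
```python
import re
from datasets import load_dataset

dolly = load_dataset("HuggingFaceH4/databricks_dolly_15k", split="train")

def strip_wiki_citations(example):
    # Remove bracketed Wikipedia citation markers such as "[42]" from the reference text.
    example["input"] = re.sub(r"\[\d+\]", "", example["input"])
    return example

dolly = dolly.map(strip_wiki_citations)
```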
# Intended Uses
While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |
indonesian-nlp/mc4-id | 2022-10-25T11:52:34.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:id",
"license:odc-by",
"arxiv:1910.10683",
"region:us"
] | indonesian-nlp | A thoroughly cleaned version of the Indonesian portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file. | @article{JMLR:v21:20-074,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
} | null | 3 | 294 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- odc-by
multilinguality:
- monolingual
size_categories:
tiny:
- 1M<n<10M
small:
- 10M<n<100M
medium:
- 10M<n<100M
large:
- 10M<n<100M
full:
- 100M<n<1B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4-id
---
# Dataset Card for Clean(maybe) Indonesia mC4
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Indonesian split of the multilingual, colossal, cleaned version of Common Crawl's web crawl corpus (mC4). Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Splits
You can load any subset like this:
```python
from datasets import load_dataset
mc4_id_tiny = load_dataset("munggok/mc4-id", "tiny")
```
Since splits are quite large, you may want to traverse them using the streaming mode available starting from 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_id_full_stream = load_dataset("munggok/mc4-id", "full", split='train', streaming=True)
print(next(iter(mc4_id_full_stream))) # Prints the first example from the stream
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
## Additional Information
### Dataset Curators
Authors at AllenAI are the original curators for the `mc4` corpus.
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
If you use this dataset in your work, please cite us and the original mC4 authors as:
```
@inproceedings{xue-etal-2021-mt5,
title = "m{T}5: A Massively Multilingual Pre-trained Text-to-Text Transformer",
author = "Xue, Linting and
Constant, Noah and
Roberts, Adam and
Kale, Mihir and
Al-Rfou, Rami and
Siddhant, Aditya and
Barua, Aditya and
Raffel, Colin",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.41",
doi = "10.18653/v1/2021.naacl-main.41",
pages = "483--498",
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
ArmelR/the-pile-splitted | 2023-09-06T09:53:16.000Z | [
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | ArmelR | null | null | null | 1 | 294 | ---
configs:
- config_name: all
data_files:
- split: train
path:
- "data/ArXiv/train/*.arrow"
- "data/BookCorpus2/train/*.arrow"
- "data/Books3/train/*.arrow"
- "data/DM Mathematics/train/*.arrow"
- "data/Enron Emails/train/*.arrow"
- "data/EuroParl/train/*.arrow"
- "data/FreeLaw/train/*.arrow"
- "data/Github/train/*.arrow"
- "data/Gutenberg (PG-19)/train/*.arrow"
- "data/HackerNews/train/*.arrow"
- "data/NIH ExPorter/train/*.arrow"
- "data/OpenSubtitles/train/*.arrow"
- "data/OpenWebText2/train/*.arrow"
- "data/PhilPapers/train/*.arrow"
- "data/Pile-CC/train/*.arrow"
- "data/PubMed Abstracts/train/*.arrow"
- "data/PubMed Central/train/*.arrow"
- "data/StackExchange/train/*.arrow"
- "data/UPSTO Backgrounds/train/*.arrow"
- "data/Ubuntu IRC/train/*.arrow"
- "data/Wikipedia (en)/train/*.arrow"
- "data/YoutubeSubtitles/train/*.arrow"
- split: test
path:
- "data/ArXiv/test/*.arrow"
- "data/BookCorpus2/test/*.arrow"
- "data/Books3/test/*.arrow"
- "data/DM Mathematics/test/*.arrow"
- "data/Enron Emails/test/*.arrow"
- "data/EuroParl/test/*.arrow"
- "data/FreeLaw/test/*.arrow"
- "data/Github/test/*.arrow"
- "data/Gutenberg (PG-19)/test/*.arrow"
- "data/HackerNews/test/*.arrow"
- "data/NIH ExPorter/test/*.arrow"
- "data/OpenSubtitles/test/*.arrow"
- "data/OpenWebText2/test/*.arrow"
- "data/PhilPapers/test/*.arrow"
- "data/Pile-CC/test/*.arrow"
- "data/PubMed Abstracts/test/*.arrow"
- "data/PubMed Central/test/*.arrow"
- "data/StackExchange/test/*.arrow"
- "data/UPSTO Backgrounds/test/*.arrow"
- "data/Ubuntu IRC/test/*.arrow"
- "data/Wikipedia (en)/test/*.arrow"
- "data/YoutubeSubtitles/test/*.arrow"
default: true
- config_name: ArXiv
data_files:
- split: train
path: "data/ArXiv/train/*.arrow"
- split: test
path: "data/ArXiv/test/*.arrow"
- config_name: BookCorpus2
data_files:
- split: train
path: "data/BookCorpus2/train/*.arrow"
- split: test
path: "data/BookCorpus2/test/*.arrow"
- config_name: Books3
data_files:
- split: train
path: "data/Books3/train/*.arrow"
- split: test
path: "data/Books3/test/*.arrow"
- config_name: DM Mathematics
data_files:
- split: train
path: "data/DM Mathematics/train/*.arrow"
- split: test
path: "data/DM Mathematics/test/*.arrow"
- config_name: Enron Emails
data_files:
- split: train
path: "data/Enron Emails/train/*.arrow"
- split: test
path: "data/Enron Emails/test/*.arrow"
- config_name: EuroParl
data_files:
- split: train
path: "data/EuroParl/train/*.arrow"
- split: test
path: "data/EuroParl/test/*.arrow"
- config_name: FreeLaw
data_files:
- split: train
path: "data/FreeLaw/train/*.arrow"
- split: test
path: "data/FreeLaw/test/*.arrow"
- config_name: Github
data_files:
- split: train
path: "data/Github/train/*.arrow"
- split: test
path: "data/Github/test/*.arrow"
- config_name: Gutenberg (PG-19)
data_files:
- split: train
path: "data/Gutenberg (PG-19)/train/*.arrow"
- split: test
path: "data/Gutenberg (PG-19)/test/*.arrow"
- config_name: HackerNews
data_files:
- split: train
path: "data/HackerNews/train/*.arrow"
- split: test
path: "data/HackerNews/test/*.arrow"
- config_name: NIH ExPorter
data_files:
- split: train
path: "data/NIH ExPorter/train/*.arrow"
- split: test
path: "data/NIH ExPorter/test/*.arrow"
- config_name: OpenSubtitles
data_files:
- split: train
path: "data/OpenSubtitles/train/*.arrow"
- split: test
path: "data/OpenSubtitles/test/*.arrow"
- config_name: OpenWebText2
data_files:
- split: train
path: "data/OpenWebText2/train/*.arrow"
- split: test
path: "data/OpenWebText2/test/*.arrow"
- config_name: PhilPapers
data_files:
- split: train
path: "data/PhilPapers/train/*.arrow"
- split: test
path: "data/PhilPapers/test/*.arrow"
- config_name: Pile-CC
data_files:
- split: train
path: "data/Pile-CC/train/*.arrow"
- split: test
path: "data/Pile-CC/test/*.arrow"
- config_name: PubMed Abstracts
data_files:
- split: train
path: "data/PubMed Abstracts/train/*.arrow"
- split: test
path: "data/PubMed Abstracts/test/*.arrow"
- config_name: PubMed Central
data_files:
- split: train
path: "data/PubMed Central/train/*.arrow"
- split: test
path: "data/PubMed Central/test/*.arrow"
- config_name: StackExchange
data_files:
- split: train
path: "data/StackExchange/train/*.arrow"
- split: test
path: "data/StackExchange/test/*.arrow"
- config_name: UPSTO Backgrounds
data_files:
- split: train
path: "data/UPSTO Backgrounds/train/*.arrow"
- split: test
path: "data/UPSTO Backgrounds/test/*.arrow"
- config_name: Ubuntu IRC
data_files:
- split: train
path: "data/Ubuntu IRC/train/*.arrow"
- split: test
path: "data/Ubuntu IRC/test/*.arrow"
- config_name: Wikipedia (en)
data_files:
- split: train
path: "data/Wikipedia (en)/train/*.arrow"
- split: test
path: "data/Wikipedia (en)/test/*.arrow"
- config_name: YoutubeSubtitles
data_files:
- split: train
path: "data/YoutubeSubtitles/train/*.arrow"
- split: test
path: "data/YoutubeSubtitles/test/*.arrow"
---
# Dataset description
[The pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of English text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).
The dataset is divided into 22 smaller high-quality datasets. For more information on
each of them, please refer to [the datasheet for the pile](https://arxiv.org/abs/2201.07311).
However, the current version of the dataset, available on the Hub, is not split accordingly.
We had to solve this problem in order to improve the user experience when it comes to working with
the pile via the Hub.
Here is an instance of the pile
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to properly divide the dataset into subsets. Each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, each instance having a new column `domain`.
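A hedged sketch of that subset assignment (not the authors' exact script), using the `meta` field described above:
```python
from datasets import load_dataset

# Stream the original pile and tag each instance with its domain.
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

def add_domain(example):
    example["domain"] = example["meta"]["pile_set_name"]
    return example

pile_with_domain = pile.map(add_domain)
print(next(iter(pile_with_domain))["domain"])
```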
We further split each subset into train/test (97%/3%) to build the current dataset, which has the following structure
```
data
ArXiv
train
test
BookCorpus2
train
test
Books3
train
test
```
# Usage
```python
from datasets import load_dataset
dataset = load_dataset(
"ArmelR/the-pile-splitted",
subset_of_interest,
num_proc=8
)
```
Using `subset_of_interest = "all"` (the default configuration) will load the whole dataset.
|