id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
Fraser/python-state-changes | 2022-10-11T17:04:35.000Z | [
"language:code",
"region:us"
] | Fraser | Python state changes from a single line of code. | null | null | 6 | 116 | ---
language:
- code
---
# Python State Changes
State changes from the execution of single lines of Python code.
All code was taken from Python HackerRank solutions, scraped from my dataset of traced HackerRank solutions: https://www.kaggle.com/frasergreenlee/ran-hackerrank-solutions
```json
{"start": "g = 100; i = 1; l = [100, 100, 0, 0, -100, -100]", "code": "g += l[i]", "end": "g = 200; i = 1; l = [100, 100, 0, 0, -100, -100]"}
{"start": "a = 1; b = 2; d = 4; i = 3; j = 2", "code": "i, j = a + (j - b), b + (d - (i - a))", "end": "a = 1; b = 2; d = 4; i = 1; j = 4"}
{"start": "b = 15", "code": "b = b // 2", "end": "b = 7"}
```
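Each record can be sanity-checked by executing `start` and then `code`, and comparing the resulting namespace against `end`. A minimal sketch using one of the lines above (rebuilding the `end` string assumes `repr`-style value formatting, which matches the simple example here but is an assumption for the general case):

```python
import json

# One record from the dataset, as a JSON line.
line = '{"start": "b = 15", "code": "b = b // 2", "end": "b = 7"}'
record = json.loads(line)

ns = {}
exec(record["start"], ns)  # set up the initial state
exec(record["code"], ns)   # run the single line of code

# Rebuild the "end" string from the resulting namespace,
# skipping the __builtins__ entry that exec() injects.
end = "; ".join(
    f"{name} = {value!r}"
    for name, value in sorted(ns.items())
    if not name.startswith("__")
)
assert end == record["end"]
```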
## Get an overview of the dataset by viewing the frequency of different ASTs
👉 https://observablehq.com/@frasergreenlee/python-lines-dataset#chart |
juletxara/xquad_xtreme | 2022-10-12T08:43:41.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|squad",
"language:en",
"language:es",
"language:de",
"language:el",
"language:hi",
"language:th",
"language:ru",
"language:tr",
"language:ar",
"language:vi",
"language:zh",
"language:ro",
"license:cc-by-sa-4.0",
"arxiv:1910.11856",
"region:us"
] | juletxara | XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering
performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set
of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German,
Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi and Romanian. Consequently, the dataset is entirely parallel
across 12 languages.
We also include "translate-train", "translate-dev", and "translate-test" splits for each non-English language from XTREME (Hu et al., 2020). These can be used to run XQuAD in the "translate-train" or "translate-test" settings. | @article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
} | null | 5 | 116 | ---
pretty_name: XQuAD-XTREME
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- es
- de
- el
- hi
- th
- ru
- tr
- ar
- vi
- zh
- ro
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: xquad
---
# Dataset Card for XQuAD-XTREME
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/xquad](https://github.com/deepmind/xquad)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 139.53 MB
- **Size of the generated dataset:** 18.09 MB
- **Total amount of disk used:** 157.62 MB
### Dataset Summary
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering
performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set
of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German,
Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi and Romanian. Consequently, the dataset is entirely parallel across 12 languages.
We also include "translate-train", "translate-dev", and "translate-test"
splits for each non-English language from XTREME ([Hu et al., 2020](https://proceedings.mlr.press/v119/hu20b/hu20b.pdf)). These can be used to run XQuAD in the "translate-train" or "translate-test" settings.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### ar
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.64 MB
- **Total amount of disk used:** 14.33 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### de
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.23 MB
- **Total amount of disk used:** 13.91 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### el
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 2.11 MB
- **Total amount of disk used:** 14.79 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### en
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.07 MB
- **Total amount of disk used:** 13.75 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### es
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.22 MB
- **Total amount of disk used:** 13.90 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
### Data Fields
The data fields are the same among all splits.
#### ar
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### de
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### el
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### en
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
#### es
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
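As in SQuAD, each `answer_start` is a character offset into `context` at which the corresponding answer `text` begins. A quick sanity check, using a made-up English example rather than an actual record:

```python
example = {
    "id": "example-0",
    "context": "The Panthers defense gave up just 308 points, ranking sixth in the league.",
    "question": "How many points did the Panthers defense give up?",
    "answers": {"text": ["308"], "answer_start": [34]},
}

start = example["answers"]["answer_start"][0]
answer = example["answers"]["text"][0]

# The answer text must appear in the context at the given offset.
assert example["context"][start:start + len(answer)] == answer
```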
### Data Splits
| name | validation |
| -------- | ---------: |
| ar | 1190 |
| de | 1190 |
| el | 1190 |
| en | 1190 |
| es | 1190 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
MMInstruction/M3IT_ML | 2023-10-10T03:37:29.000Z | [
"license:cc-by-4.0",
"region:us"
] | MMInstruction | Multi-modal Bi-lingual Instruction Dataset | null | null | 1 | 116 | ---
license: cc-by-4.0
---
|
luisroque/instruct-python-llama2-20k | 2023-08-18T09:44:00.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | luisroque | null | null | null | 0 | 116 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34661192.7
num_examples: 19000
- name: test
num_bytes: 1824273.3
num_examples: 1000
download_size: 19060329
dataset_size: 36485466
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 10K<n<100K
---
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
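The HTML-tag-removal step above can be sketched with a regex. This is an illustration, not the author's actual code; a production pipeline would more likely use an HTML parser such as BeautifulSoup:

```python
import re

def strip_html(text: str) -> str:
    """Crude HTML tag removal: delete anything between < and >."""
    return re.sub(r"<[^>]+>", "", text)

cleaned = strip_html("<p>How do I <b>merge</b> two dicts?</p>")
```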
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:
`<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]`
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
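The prompt assembly can be sketched as a small helper. This follows the single-line layout shown in this card; note that the official Llama2 chat template also inserts newlines around the system block:

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in the Llama2 instruction format."""
    return f"<s>[INST] <<SYS>> {system_prompt} <</SYS>> {user_message} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful Python coding assistant.",
    "How do I reverse a list in Python?",
)
```

During fine-tuning, the target completion is appended after the closing `[/INST]` token.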
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) |
vhtran/uniq-id-en | 2023-09-06T13:42:25.000Z | [
"license:cc-by-4.0",
"region:us"
] | vhtran | null | null | null | 0 | 116 | ---
license: cc-by-4.0
---
For translating Indonesian to English |
HAERAE-HUB/HAE_RAE_BENCH | 2023-09-28T02:27:35.000Z | [
"task_categories:multiple-choice",
"language:ko",
"license:cc-by-nc-nd-4.0",
"arxiv:2309.02706",
"region:us"
] | HAERAE-HUB | HAE-RAE Bench | @article{son2023hae,
title={HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models},
author={Son, Guijin and Lee, Hanwool and Kim, Suwan and Lee, Jaecheol and Yeom, Je Won and Jung, Jihyu and Kim, Jung Woo and Kim, Songseong},
journal={arXiv preprint arXiv:2309.02706},
year={2023}
} | null | 0 | 116 | ---
license: cc-by-nc-nd-4.0
extra_gated_prompt: >-
To request access to the dataset, please fill out this form, and we'll review
and let you know if your use case is approved.
extra_gated_fields:
First Name: text
Last Name: text
Institution: text
Intended Use: text
I agree to use this dataset for non-commercial research ONLY: checkbox
task_categories:
- multiple-choice
language:
- ko
---
HAE-RAE Bench is an evaluation suite specifically curated to challenge models that lack Korean cultural and contextual depth.
For a comprehensive overview, refer to our [paper](https://arxiv.org/abs/2309.02706).
The HAE-RAE Bench team is constantly working to broaden its coverage and regularly introduces new tasks to the benchmark.
For detailed information on the tasks included, please refer to our release notes.
### Release Notes
__2023.09.28__: [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) support added for the following 8 tasks:
Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge, correct_definition_matching, date_understanding, reading_comprehension.
Refer to the following [document](https://github.com/guijinSON/HAE-RAE-Bench.v2/blob/main/HAE_RAE_Bench_Evaluation.ipynb) to run the evaluation yourself.
__2023.09.16__: 10 tasks added, 5 from the original HAE-RAE Bench (Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge),
5 new tasks (correct_definition_matching, date_understanding, lyrics_denoising, proverbs_denoising, reading_comprehension) |
persiannlp/parsinlu_translation_en_fa | 2022-10-24T16:50:37.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:fa",
"multilinguality:en",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | persiannlp | A Persian translation dataset (English -> Persian). | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
} | null | 1 | 115 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- fa
- en
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# Dataset Card for ParsiNLU (Machine Translation)
## Table of Contents
- [Dataset Card for ParsiNLU (Machine Translation)](#dataset-card-for-parsinlu-machine-translation)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian translation dataset (English -> Persian).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`) and English (`en`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"source": "how toil to raise funds, propagate reforms, initiate institutions!",
"targets": ["چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد."],
"category": "mizan_dev_en_fa"
}
```
### Data Fields
- `source`: the input sentences, in English.
- `targets`: the list of gold target translations in Persian.
- `category`: the source from which the dataset is mined.
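Because `targets` is a list, a single source sentence may carry several gold translations, so evaluation should be multi-reference. Flattening records into (source, target) pairs for inspection can be sketched as follows (the records here are made up for illustration, not actual dataset rows):

```python
records = [
    {"source": "good morning", "targets": ["sobh bekheir"], "category": "example"},
    {"source": "thank you", "targets": ["merci", "mamnoon"], "category": "example"},
]

# One (source, target) pair per gold reference translation.
pairs = [(rec["source"], tgt) for rec in records for tgt in rec["targets"]]
```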
### Data Splits
The train/dev/test split contains 1,621,666/2,138/48,360 samples.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
  author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
  year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
sdadas/8tags | 2022-12-29T11:40:52.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | sdadas | null | null | null | 0 | 115 | ---
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-class-classification
pretty_name: 8TAGS
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: film
1: history
2: food
3: medicine
4: motorization
5: work
6: sport
7: technology
splits:
- name: train
- name: validation
- name: test
---
# 8TAGS
### Dataset Summary
A Polish topic classification dataset consisting of headlines from social media posts. It contains about 50,000 sentences annotated with 8 topic labels: film, history, food, medicine, motorization, work, sport, and technology. The dataset was created automatically by extracting sentences from headlines and short descriptions of articles posted on the Polish social networking site **wykop.pl**. The service allows users to annotate articles with one or more tags (categories), and the dataset represents a selection of article sentences from 8 popular categories. The resulting corpus contains cleaned, tokenized sentences that are unambiguous (tagged with only one of the selected categories) and longer than 30 characters.
### Data Instances
Example instance:
```
{
"sentence": "Kierowca był nieco zdziwiony że podróżując sporo ponad 200 km / h zatrzymali go policjanci.",
"label": "4"
}
```
### Data Fields
- sentence: sentence text
- label: label identifier corresponding to one of 8 topics
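The `label` field is a class index; mapping it back to a topic name follows the order declared in the `class_label` names in the frontmatter above:

```python
# Topic names in the order given by the dataset's class_label definition.
LABELS = ["film", "history", "food", "medicine", "motorization", "work", "sport", "technology"]

example = {
    "sentence": "Kierowca był nieco zdziwiony że podróżując sporo ponad 200 km / h zatrzymali go policjanci.",
    "label": "4",
}

topic = LABELS[int(example["label"])]  # index 4 -> "motorization"
```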
### Citation Information
```
@inproceedings{dadas-etal-2020-evaluation,
title = "Evaluation of Sentence Representations in {P}olish",
author = "Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.207",
pages = "1674--1680",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
|
clarin-pl/poquad | 2023-07-04T10:50:43.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
"region:us"
] | clarin-pl | PoQuaD description | Tuora, R., Zawadzka-Paluektau, N., Klamra, C., Zwierzchowska, A., Kobyliński, Ł. (2022).
Towards a Polish Question Answering Dataset (PoQuAD).
In: Tseng, YH., Katsurai, M., Nguyen, H.N. (eds) From Born-Physical to Born-Virtual: Augmenting Intelligence in Digital Libraries. ICADL 2022.
Lecture Notes in Computer Science, vol 13636. Springer, Cham.
https://doi.org/10.1007/978-3-031-21756-2_16 | null | 1 | 115 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: PoQuaD
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
---
PoQuaD dataset |
metaeval/reclor | 2023-05-31T09:59:42.000Z | [
"language:en",
"license:other",
"region:us"
] | metaeval | null | null | null | 2 | 115 | ---
license: other
language:
- en
---
https://whyu.me/reclor/
```bib
@inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
}
``` |
vietgpt/the_pile_openwebtext2 | 2023-07-15T09:20:18.000Z | [
"language:en",
"region:us"
] | vietgpt | null | null | null | 1 | 115 | ---
language: en
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: reddit_scores
sequence: int32
splits:
- name: train
num_bytes: 68786199155
num_examples: 17103059
download_size: 42444568964
dataset_size: 68786199155
---
# Dataset Card for "the_pile_openwebtext2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxu124/llava_detail_23k | 2023-05-20T18:47:30.000Z | [
"region:us"
] | jxu124 | null | null | null | 0 | 115 | ---
dataset_info:
features:
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: dialog
sequence:
sequence: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 17698232
num_examples: 23240
download_size: 7640667
dataset_size: 17698232
---
# Dataset Card for "llava_detail_23k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChilleD/StrategyQA | 2023-08-26T03:18:40.000Z | [
"license:mit",
"region:us"
] | ChilleD | null | null | null | 0 | 115 | ---
license: mit
dataset_info:
features:
- name: qid
dtype: string
- name: term
dtype: string
- name: description
dtype: string
- name: question
dtype: string
- name: answer
dtype: bool
- name: facts
dtype: string
splits:
- name: train
num_bytes: 524456
num_examples: 1603
- name: test
num_bytes: 226237
num_examples: 687
download_size: 530106
dataset_size: 750693
---
|
approach0/mathy-phase2 | 2023-08-24T00:25:38.000Z | [
"region:us"
] | approach0 | null | null | null | 0 | 115 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: query
dtype: string
- name: prompt
dtype: string
- name: solution
dtype: string
- name: ground_truth
dtype: 'null'
- name: judge_buffer
dtype: 'null'
- name: manual_query
dtype: 'null'
- name: manual_rating
dtype: int64
- name: args
dtype: string
splits:
- name: train
num_bytes: 470590.71186440677
num_examples: 114
- name: test
num_bytes: 260063.28813559323
num_examples: 63
download_size: 0
dataset_size: 730654.0
---
# Dataset Card for "mathy-phase2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
corbt/unlabeled-recipes | 2023-08-23T23:43:57.000Z | [
"region:us"
] | corbt | null | null | null | 0 | 115 | ---
dataset_info:
features:
- name: recipe
dtype: string
splits:
- name: train
num_bytes: 2793853
num_examples: 5000
download_size: 1465640
dataset_size: 2793853
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "unlabeled-recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Alignment-Lab-AI/agentcode | 2023-09-08T08:27:16.000Z | [
"region:us"
] | Alignment-Lab-AI | null | null | null | 6 | 115 | Entry not found |
p1atdev/instruction_qa | 2023-09-27T04:57:47.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:ja",
"language:en",
"region:us"
] | p1atdev | null | null | null | 0 | 115 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: system
dtype: string
- name: question
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 201688797.5894788
num_examples: 137343
- name: test
num_bytes: 22413782.410521213
num_examples: 15263
download_size: 108872688
dataset_size: 224102580
task_categories:
- text-generation
- question-answering
language:
- ja
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "instruction_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
code_x_glue_cc_code_to_code_trans | 2023-07-27T14:11:43.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:other-programming-languages",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:code",
"license:c-uda",
"code-to-code",
"arxiv:2102.04664",
"region:us"
] | null | The dataset is collected from several public repos, including Lucene (http://lucene.apache.org/), POI (http://poi.apache.org/), JGit (https://github.com/eclipse/jgit/) and Antlr (https://github.com/antlr/).
We collect both the Java and C# versions of the code and find the parallel functions. After removing duplicates and functions with an empty body, we split the whole dataset into training, validation and test sets. | @article{DBLP:journals/corr/abs-2102-04664,
author = {Shuai Lu and
Daya Guo and
Shuo Ren and
Junjie Huang and
Alexey Svyatkovskiy and
Ambrosio Blanco and
Colin B. Clement and
Dawn Drain and
Daxin Jiang and
Duyu Tang and
Ge Li and
Lidong Zhou and
Linjun Shou and
Long Zhou and
Michele Tufano and
Ming Gong and
Ming Zhou and
Nan Duan and
Neel Sundaresan and
Shao Kun Deng and
Shengyu Fu and
Shujie Liu},
title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
and Generation},
journal = {CoRR},
volume = {abs/2102.04664},
year = {2021}
} | null | 3 | 114 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- code
license:
- c-uda
multilinguality:
- other-programming-languages
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: CodeXGlueCcCodeToCodeTrans
tags:
- code-to-code
dataset_info:
features:
- name: id
dtype: int32
- name: java
dtype: string
- name: cs
dtype: string
splits:
- name: train
num_bytes: 4372657
num_examples: 10300
- name: validation
num_bytes: 226415
num_examples: 500
- name: test
num_bytes: 418595
num_examples: 1000
download_size: 4876035
dataset_size: 5017667
---
# Dataset Card for "code_x_glue_cc_code_to_code_trans"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans
- **Paper:** https://arxiv.org/abs/2102.04664
### Dataset Summary
CodeXGLUE code-to-code-trans dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans
The dataset is collected from several public repos, including Lucene(http://lucene.apache.org/), POI(http://poi.apache.org/), JGit(https://github.com/eclipse/jgit/) and Antlr(https://github.com/antlr/).
We collect both the Java and C# versions of the code and find the parallel functions. After removing duplicates and functions with empty bodies, we split the whole dataset into training, validation and test sets.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for translating code from Java to C# and vice versa.
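Since each record carries both the Java and the C# version of a function, a single record can be expanded into two directed translation examples. A minimal sketch (the `to_translation_pairs` helper is illustrative, not part of the official loader):

```python
def to_translation_pairs(record):
    """Expand one parallel record into both translation directions."""
    return [
        {"source": record["java"], "target": record["cs"], "direction": "java-cs"},
        {"source": record["cs"], "target": record["java"], "direction": "cs-java"},
    ]

pairs = to_translation_pairs({"id": 0, "java": "int f(){return 1;}", "cs": "int F(){return 1;}"})
print(len(pairs))  # 2
```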
### Languages
- Java **programming** language
- C# **programming** language
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"cs": "public DVRecord(RecordInputStream in1){_option_flags = in1.ReadInt();_promptTitle = ReadUnicodeString(in1);_errorTitle = ReadUnicodeString(in1);_promptText = ReadUnicodeString(in1);_errorText = ReadUnicodeString(in1);int field_size_first_formula = in1.ReadUShort();_not_used_1 = in1.ReadShort();_formula1 = NPOI.SS.Formula.Formula.Read(field_size_first_formula, in1);int field_size_sec_formula = in1.ReadUShort();_not_used_2 = in1.ReadShort();_formula2 = NPOI.SS.Formula.Formula.Read(field_size_sec_formula, in1);_regions = new CellRangeAddressList(in1);}\n",
"id": 0,
"java": "public DVRecord(RecordInputStream in) {_option_flags = in.readInt();_promptTitle = readUnicodeString(in);_errorTitle = readUnicodeString(in);_promptText = readUnicodeString(in);_errorText = readUnicodeString(in);int field_size_first_formula = in.readUShort();_not_used_1 = in.readShort();_formula1 = Formula.read(field_size_first_formula, in);int field_size_sec_formula = in.readUShort();_not_used_2 = in.readShort();_formula2 = Formula.read(field_size_sec_formula, in);_regions = new CellRangeAddressList(in);}\n"
}
```
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### default
|field name| type | description |
|----------|------|-----------------------------|
|id |int32 | Index of the sample |
|java |string| The java version of the code|
|cs |string| The C# version of the code |
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|10300| 500|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{DBLP:journals/corr/abs-2102-04664,
author = {Shuai Lu and
Daya Guo and
Shuo Ren and
Junjie Huang and
Alexey Svyatkovskiy and
Ambrosio Blanco and
Colin B. Clement and
Dawn Drain and
Daxin Jiang and
Duyu Tang and
Ge Li and
Lidong Zhou and
Linjun Shou and
Long Zhou and
Michele Tufano and
Ming Gong and
Ming Zhou and
Nan Duan and
Neel Sundaresan and
Shao Kun Deng and
Shengyu Fu and
Shujie Liu},
title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
and Generation},
journal = {CoRR},
volume = {abs/2102.04664},
year = {2021}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
GroNLP/divemt | 2023-02-10T11:04:33.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:it",
"language:vi",
"language:nl",
"language:uk",
"language:tr",
"language:ar",
"license:gpl-3.0",
"arxiv:2205.12215",
"region:us"
] | GroNLP | DivEMT is the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times, pauses, and perceived effort were logged, enabling an in-depth, cross-lingual evaluation of NMT quality and its post-editing process. | @inproceedings{sarti-etal-2022-divemt,
title = "{D}iv{EMT}: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages",
author = "Sarti, Gabriele and Bisazza, Arianna and Guerberof Arenas, Ana and Toral, Antonio",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.532",
pages = "7795--7816",
} | null | 2 | 114 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
- it
- vi
- nl
- uk
- tr
- ar
license:
- gpl-3.0
multilinguality:
- translation
pretty_name: divemt
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
---
# Dataset Card for DivEMT
*For more details on DivEMT, see our [EMNLP 2022 Paper](https://arxiv.org/abs/2205.12215) and our [Github repository](https://github.com/gsarti/divemt)*
## Dataset Description
- **Source:** [Github](https://github.com/gsarti/divemt)
- **Paper:** [Arxiv](https://arxiv.org/abs/2205.12215)
- **Point of Contact:** [Gabriele Sarti](mailto:g.sarti@rug.nl)
[Gabriele Sarti](https://gsarti.com) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Antonio Toral](https://antoniotor.al/)
<img src="https://huggingface.co/datasets/GroNLP/divemt/resolve/main/divemt.png" alt="DivEMT annotation pipeline" width="600"/>
>We introduce DivEMT, the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times and pauses were recorded, enabling an in-depth, cross-lingual evaluation of NMT quality and post-editing effectiveness. Using this new dataset, we assess the impact of two state-of-the-art NMT systems, Google Translate and the multilingual mBART-50 model, on translation productivity. We find that post-editing is consistently faster than translation from scratch. However, the magnitude of productivity gains varies widely across systems and languages, highlighting major disparities in post-editing effectiveness for languages at different degrees of typological relatedness to English, even when controlling for system architecture and training data size. We publicly release the complete dataset including all collected behavioral data, to foster new research on the translation capabilities of NMT systems for typologically diverse languages.
### Dataset Summary
This dataset contains the processed `warmup` and `main` splits of the DivEMT dataset. A sample of documents extracted from the Flores-101 corpus was either translated from scratch or post-edited from an existing automatic translation by a total of 18 professional translators across six typologically diverse languages (Arabic, Dutch, Italian, Turkish, Ukrainian, Vietnamese). During the translation, behavioral data (keystrokes, pauses, editing times) were collected using the [PET](https://github.com/wilkeraziz/PET) platform.
We publicly release the processed dataset including all collected behavioural data, to foster new research on the ability of state-of-the-art NMT systems to generate text in typologically diverse languages.
### News 🎉
**February, 2023**: The DivEMT dataset now contains linguistic annotations (`*_annotations` fields) computed with Stanza and word-level quality estimation tags (`src_wmt22_qe`, `mt_wmt22_qe`) obtained using the same scripts adopted for the WMT22 QE Task 2.
### Languages
The language data of DivEMT is in English (BCP-47 `en`), Italian (BCP-47 `it`), Dutch (BCP-47 `nl`), Arabic (BCP-47 `ar`), Turkish (BCP-47 `tr`), Ukrainian (BCP-47 `uk`) and Vietnamese (BCP-47 `vi`).
## Dataset Structure
### Data Instances
The dataset contains two configurations: `main` and `warmup`. `main` contains the full data collected during the main task and analyzed during our experiments. `warmup` contains the data collected in the verification phase, before the main task begins.
### Data Fields
The following fields are contained in the training set:
|Field|Description|
|-----|-----------|
|`unit_id` | The full entry identifier. Format: `flores101-{config}-{lang}-{doc_id}-{modality}-{sent_in_doc_num}` |
|`flores_id` | Index of the sentence in the original [Flores-101](https://huggingface.co/datasets/gsarti/flores_101) dataset |
|`item_id` | The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 contiguous sentences each. |
|`subject_id` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. |
|`lang_id` | Language identifier for the sentence, using Flores-101 three-letter format (e.g. `ara`, `nld`)|
|`doc_id` | Document identifier for the sentence |
|`task_type` | The modality of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART 1-to-50](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). |
|`translation_type` | Either `ht` for from scratch or `pe` for post-editing |
|`src_len_chr` | Length of the English source text in number of characters |
|`mt_len_chr` | Length of the machine translation in number of characters (NaN for ht) |
|`tgt_len_chr` | Length of the target text in number of characters |
|`src_len_wrd` | Length of the English source text in number of words |
|`mt_len_wrd` | Length of the machine translation in number of words (NaN for ht) |
|`tgt_len_wrd` | Length of the target text in number of words |
|`edit_time` | Total editing time for the translation in seconds. |
|`k_total` | Total number of keystrokes for the translation. |
|`k_letter` | Total number of letter keystrokes for the translation. |
|`k_digit` | Total number of digit keystrokes for the translation. |
|`k_white` | Total number of whitespace keystrokes for the translation. |
|`k_symbol` | Total number of symbol (punctuation, etc.) keystrokes for the translation. |
|`k_nav` | Total number of navigation keystrokes (left-right arrows, mouse clicks) for the translation. |
|`k_erase` | Total number of erase keystrokes (backspace, cancel) for the translation. |
|`k_copy` | Total number of copy (Ctrl + C) actions during the translation. |
|`k_cut` | Total number of cut (Ctrl + X) actions during the translation. |
|`k_paste` | Total number of paste (Ctrl + V) actions during the translation. |
|`k_do` | Total number of Enter actions during the translation. |
|`n_pause_geq_300` | Number of pauses of 300ms or more during the translation. |
|`len_pause_geq_300` | Total duration of pauses of 300ms or more, in milliseconds. |
|`n_pause_geq_1000` | Number of pauses of 1s or more during the translation. |
|`len_pause_geq_1000` | Total duration of pauses of 1000ms or more, in milliseconds. |
|`event_time` | Total time summed across all translation events, should be comparable to `edit_time` in most cases. |
|`num_annotations` | Number of times the translator focused the textbox for performing the translation of the sentence during the translation session. E.g. 1 means the translation was performed once and never revised. |
|`n_insert` | Number of post-editing insertions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`n_delete` | Number of post-editing deletions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`n_substitute` | Number of post-editing substitutions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`n_shift` | Number of post-editing shifts (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
|`tot_shifted_words` | Total amount of shifted words from all shifts present in the sentence. |
|`tot_edits` | Total of all edit types for the sentence. |
|`hter` | Human-mediated Translation Edit Rate score computed between MT and post-edited TGT (empty for modality `ht`) using the [tercom](https://github.com/jhclark/tercom) library. |
|`cer` | Character-level HTER score computed between MT and post-edited TGT (empty for modality `ht`) using [CharacTER](https://github.com/rwth-i6/CharacTER). |
|`bleu` | Sentence-level BLEU score between MT and post-edited TGT (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
|`chrf` | Sentence-level chrF score between MT and post-edited TGT (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
|`time_s` | Edit time expressed in seconds. |
|`time_m` | Edit time expressed in minutes. |
|`time_h` | Edit time expressed in hours. |
|`time_per_char` | Edit time per source character, expressed in seconds. |
|`time_per_word` | Edit time per source word, expressed in seconds. |
|`key_per_char` | Proportion of keys per character needed to perform the translation. |
|`words_per_hour` | Amount of source words translated or post-edited per hour. |
|`words_per_minute` | Amount of source words translated or post-edited per minute. |
|`per_subject_visit_order` | Id denoting the order in which the translator accessed documents. 1 correspond to the first accessed document. |
|`src_text` | The original source sentence extracted from Wikinews, Wikibooks or Wikivoyage. |
|`mt_text` | Missing if tasktype is `ht`. Otherwise, contains the automatically-translated sentence before post-editing. |
|`tgt_text` | Final sentence produced by the translator (either via translation from scratch of `sl_text` or post-editing `mt_text`) |
|`aligned_edit` | Aligned visual representation of REF (`mt_text`), HYP (`tl_text`) and edit operations (I = Insertion, D = Deletion, S = Substitution) performed on the field. Replace `\\n` with `\n` to show the three aligned rows.|
|`src_tokens` | List of tokens obtained tokenizing `src_text` with Stanza using default params. |
|`src_annotations` | List of lists (one per `src_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
|`mt_tokens` | List of tokens obtained tokenizing `mt_text` with Stanza using default params. |
|`mt_annotations` | List of lists (one per `mt_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
|`tgt_tokens` | List of tokens obtained tokenizing `tgt_text` with Stanza using default params. |
|`tgt_annotations` | List of lists (one per `tgt_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
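The `aligned_edit` field packs its three rows behind literal `\\n` separators, as noted above; a minimal sketch of unpacking them (the string here is abbreviated from a real record):

```python
aligned_edit = "REF: bir örnek olarak, ...\\nHYP: *** ***** örneğin, ...\\nEVAL: D D S"
ref, hyp, evaluation = aligned_edit.split("\\n")  # split on the two-character sequence "\n"
print(evaluation.strip())  # EVAL: D D S
```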
### Data Splits
| config | train|
|-------:|-----:|
|`main` | 7740 (107 docs i.e. 430 sents x 18 translators) |
|`warmup`| 360 (5 docs i.e. 20 sents x 18 translators) |
#### Train Split
The `train` split contains the totality of triplets (or pairs, when translation from scratch is performed) annotated with behavioral data produced during the translation.
The following is an example of the subject `t1` post-editing a machine translation produced by Google Translate (task_type `pe1`) taken from the `train` split for Turkish. The field `aligned_edit` is shown over three lines to provide a visual understanding of its contents.
```json
{
'unit_id': 'flores101-main-tur-46-pe1-3',
'flores_id': 871,
'item_id': 'flores101-main-463',
'subject_id': 'tur_t1',
'task_type': 'pe1',
'translation_type': 'pe',
'src_len_chr': 109,
'mt_len_chr': 129.0,
'tgt_len_chr': 120,
'src_len_wrd': 17,
'mt_len_wrd': 15.0,
'tgt_len_wrd': 13,
'edit_time': 11.762999534606934,
'k_total': 31,
'k_letter': 9,
'k_digit': 0,
'k_white': 0,
'k_symbol': 0,
'k_nav': 20,
'k_erase': 2,
'k_copy': 0,
'k_cut': 0,
'k_paste': 0,
'k_do': 0,
'n_pause_geq_300': 2,
'len_pause_geq_300': 4986,
'n_pause_geq_1000': 1,
'len_pause_geq_1000': 4490,
'event_time': 11763,
'num_annotations': 2,
'last_modification_time': 1643569484,
'n_insert': 0.0,
'n_delete': 2.0,
'n_substitute': 1.0,
'n_shift': 0.0,
'tot_shifted_words': 0.0,
'tot_edits': 3.0,
'hter': 20.0,
'cer': 0.10,
'bleu': 0.0,
'chrf': 2.569999933242798,
'lang_id': 'tur',
'doc_id': 46,
'time_s': 11.762999534606934,
'time_m': 0.1960500031709671,
'time_h': 0.0032675000838935375,
'time_per_char': 0.1079174280166626,
'time_per_word': 0.6919412016868591,
'key_per_char': 0.2844036817550659,
'words_per_hour': 5202.75439453125,
'words_per_minute': 86.71257019042969,
'per_subject_visit_order': 201,
'src_text': 'As one example, American citizens in the Middle East might face different situations from Europeans or Arabs.',
'mt_text': "Bir örnek olarak, Orta Doğu'daki Amerikan vatandaşları, Avrupalılardan veya Araplardan farklı durumlarla karşı karşıya kalabilir.",
'tgt_text': "Örneğin, Orta Doğu'daki Amerikan vatandaşları, Avrupalılardan veya Araplardan farklı durumlarla karşı karşıya kalabilir.",
'aligned_edit': "REF: bir örnek olarak, orta doğu'daki amerikan vatandaşları, avrupalılardan veya araplardan farklı durumlarla karşı karşıya kalabilir.\\n
HYP: *** ***** örneğin, orta doğu'daki amerikan vatandaşları, avrupalılardan veya araplardan farklı durumlarla karşı karşıya kalabilir.\\n
EVAL: D D S"
}
```
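Several derived fields in this record can be re-checked by hand: `hter` is the total number of edit operations over the number of MT (reference) words, in percent, and `time_per_word` is the edit time over the number of source words. A rough sanity check (not the official tercom computation):

```python
def hter(n_insert, n_delete, n_substitute, n_shift, ref_len_wrd):
    """Approximate HTER: total edit operations over reference length, in percent."""
    return 100.0 * (n_insert + n_delete + n_substitute + n_shift) / ref_len_wrd

# Values from the record above: 2 deletions + 1 substitution over 15 MT words
print(hter(0, 2, 1, 0, 15))               # 20.0
print(round(11.762999534606934 / 17, 4))  # time_per_word: 0.6919
```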
The text is provided as-is, without further preprocessing or tokenization.
### Dataset Creation
The dataset was parsed from PET XML files into CSV format using the scripts available in the [DivEMT Github repository](https://github.com/gsarti/divemt).
Those are adapted from the ones by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral-ruiz) found at the following link: [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers).
## Additional Information
### Dataset Curators
For problems related to this 🤗 Datasets version, please contact me at [g.sarti@rug.nl](mailto:g.sarti@rug.nl).
### Citation Information
```bibtex
@inproceedings{sarti-etal-2022-divemt,
title = "{D}iv{EMT}: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages",
author = "Sarti, Gabriele and
Bisazza, Arianna and
Guerberof-Arenas, Ana and
Toral, Antonio",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.532",
pages = "7795--7816",
}
``` |
rungalileo/medical_transcription_4 | 2022-08-04T04:58:36.000Z | [
"region:us"
] | rungalileo | null | null | null | 1 | 114 | Entry not found |
csebuetnlp/BanglaNMT | 2023-02-24T14:46:55.000Z | [
"task_categories:translation",
"annotations_creators:other",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"language:bn",
"language:en",
"license:cc-by-nc-sa-4.0",
"bengali",
"BanglaNMT",
"region:us"
] | csebuetnlp | This is the largest Machine Translation (MT) dataset for Bengali-English, introduced in the paper
`Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation`. | @inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
} | null | 0 | 114 | ---
annotations_creators:
- other
language:
- bn
- en
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
pretty_name: BanglaNMT
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- bengali
- BanglaNMT
task_categories:
- translation
---
# Dataset Card for `BanglaNMT`
## Table of Contents
- [Dataset Card for `BanglaNMT`](#dataset-card-for-banglanmt)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglanmt](https://github.com/csebuetnlp/banglanmt)
- **Paper:** [**"Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation"**](https://www.aclweb.org/anthology/2020.emnlp-main.207)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is the largest Machine Translation (MT) dataset for Bengali-English, curated using novel sentence alignment methods introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
**Note:** This is a filtered version of the original dataset that the authors used for NMT training. For the complete set, refer to the offical [repository](https://github.com/csebuetnlp/banglanmt)
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Languages
- `Bengali`
- `English`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/BanglaNMT")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
'bn': 'বিমানবন্দরে যুক্তরাজ্যে নিযুক্ত বাংলাদেশ হাইকমিশনার সাঈদা মুনা তাসনীম ও লন্ডনে বাংলাদেশ মিশনের জ্যেষ্ঠ কর্মকর্তারা তাকে বিদায় জানান।',
'en': 'Bangladesh High Commissioner to the United Kingdom Saida Muna Tasneen and senior officials of Bangladesh Mission in London saw him off at the airport.'
}
```
### Data Fields
The data fields are as follows:
- `bn`: a `string` feature indicating the Bengali sentence.
- `en`: a `string` feature indicating the English translation.
### Data Splits
| split |count |
|----------|--------|
|`train`| 2379749 |
|`validation`| 597 |
|`test`| 1000 |
## Dataset Creation
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Source Data
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglanmt)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglanmt)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. |
keremberke/indoor-scene-classification | 2023-01-16T21:04:18.000Z | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Retail",
"Pest Control",
"Benchmark",
"region:us"
] | keremberke | null | \ | null | 0 | 114 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Retail
- Pest Control
- Benchmark
---
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling']
```
### Number of Images
```json
{'train': 10885, 'test': 1558, 'valid': 3128}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/indoor-scene-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface)
### Citation
```
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
|
ArmelR/stack-exchange-sample10000 | 2023-04-06T13:19:45.000Z | [
"region:us"
] | ArmelR | null | null | null | 2 | 114 | ---
dataset_info:
features:
- name: qid
dtype: int64
- name: question
dtype: string
- name: date
dtype: string
- name: metadata
sequence: string
- name: response_j
dtype: string
- name: response_k
dtype: string
splits:
- name: train
num_bytes: 27983797.447734267
num_examples: 10000
download_size: 15522939
dataset_size: 27983797.447734267
---
# Dataset Card for "stack-exchange-sample10000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Docugami/dfm-csl-large-benchmark | 2023-10-04T08:41:01.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"docugami",
"dfm-csl",
"xml-knowledge-graphs",
"region:us"
] | Docugami | null | null | null | 4 | 114 | ---
license: mit
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
dataset_info:
features:
- name: Text
dtype: string
- name: Ground Truth
dtype: string
- name: docugami/dfm-csl-large
dtype: string
splits:
- name: eval
num_bytes: 1137328
num_examples: 1088
- name: train
num_bytes: 83236
num_examples: 104
download_size: 572546
dataset_size: 1220564
tags:
- docugami
- dfm-csl
- xml-knowledge-graphs
pretty_name: Contextual Semantic Lables (Large)
---
# Contextual Semantic Labels (Large) Benchmark Dataset
Please see [https://github.com/docugami/DFM-benchmarks](https://github.com/docugami/DFM-benchmarks) for more details, eval code, and current scores for different models.
# Using Dataset
Please refer to the standard Hugging Face documentation to use this dataset: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index)
The [explore.ipynb](./explore.ipynb) notebook has some reference code. |
llm-book/jawiki-sentences | 2023-06-03T03:03:22.000Z | [
"size_categories:10M<n<100M",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | llm-book | null | null | null | 1 | 114 | ---
language:
- ja
size_categories:
- 10M<n<100M
license:
- cc-by-sa-3.0
- gfdl
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3569619848
num_examples: 24387500
download_size: 1297833377
dataset_size: 3569619848
---
# Dataset Card for llm-book/jawiki-sentences
A dataset of Wikipedia sentences used in the book *大規模言語モデル入門* (Introduction to Large Language Models).
It uses the dataset published in the GitHub repository [singletongue/wikipedia-utils](https://github.com/singletongue/wikipedia-utils).
## Licence
The Wikipedia content used in this dataset is distributed under the [Creative Commons Attribution-ShareAlike 3.0 license (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html).
|
wtcherr/unsplash_20k | 2023-06-11T23:49:45.000Z | [
"region:us"
] | wtcherr | null | null | null | 0 | 114 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2560499324.351
num_examples: 19999
download_size: 440556200
dataset_size: 2560499324.351
---
# Dataset Card for "unsplash_20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SpeedOfMagic/trivia_qa_tiny | 2023-09-08T16:39:19.000Z | [
"size_categories:n<1K",
"language:en",
"region:us"
] | SpeedOfMagic | null | null | null | 0 | 114 | ---
language:
- en
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains 100 samples from the [trivia_qa](https://huggingface.co/datasets/trivia_qa) dataset. It is used mainly for testing purposes.
### Languages
English.
## Dataset Structure
### Data Instances
Total data size: 8Kb.
### Data Fields
- `question`: string feature containing the question to be answered.
- `answer`: string feature, the answer to the question.
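A hypothetical record illustrating these fields (the question and answer below are made up for illustration, not taken from the dataset):

```json
{"question": "Which country hosted the 1966 FIFA World Cup?", "answer": "England"}
```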
### Data Splits
Only the `test` split, which contains 100 rows, is supported.
|
approach0/MATH-no-asy | 2023-09-13T01:47:49.000Z | [
"region:us"
] | approach0 | null | null | null | 0 | 114 | ---
dataset_info:
features:
- name: src_path
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 5157479.0
num_examples: 6217
- name: test
num_bytes: 3381766.0
num_examples: 4212
download_size: 3505684
dataset_size: 8539245.0
---
# Dataset Card for "MATH"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
karan4d/instruct_machiavellian_textbooks | 2023-10-03T16:30:54.000Z | [
"license:apache-2.0",
"region:us"
] | karan4d | null | null | null | 0 | 114 | ---
license: apache-2.0
---
credits: shoutout to @vikp, whose textbook_quality GH repo this was created with
dataset info: a bunch of bad boy data for Machiavellian LLMs |
turkish_ner | 2023-01-25T14:54:39.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
"license:cc-by-4.0",
"arxiv:1702.02363",
"region:us"
] | null | Turkish Wikipedia Named-Entity Recognition and Text Categorization
(TWNERTC) dataset is a collection of automatically categorized and annotated
sentences obtained from Wikipedia. The authors constructed large-scale
gazetteers by using a graph crawler algorithm to extract
relevant entity and domain information
from a semantic knowledge base, Freebase.
The constructed gazetteers contain approximately
300K entities with thousands of fine-grained entity types
under 77 different domains. | @article{DBLP:journals/corr/SahinTYES17,
author = {H. Bahadir Sahin and
Caglar Tirkaz and
Eray Yildiz and
Mustafa Tolga Eren and
Omer Ozan Sonmez},
title = {Automatically Annotated Turkish Corpus for Named Entity Recognition
and Text Categorization using Large-Scale Gazetteers},
journal = {CoRR},
volume = {abs/1702.02363},
year = {2017},
url = {http://arxiv.org/abs/1702.02363},
archivePrefix = {arXiv},
eprint = {1702.02363},
timestamp = {Mon, 13 Aug 2018 16:46:36 +0200},
biburl = {https://dblp.org/rec/journals/corr/SahinTYES17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 5 | 113 | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- tr
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TurkishNer
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: domain
dtype:
class_label:
names:
'0': architecture
'1': basketball
'2': book
'3': business
'4': education
'5': fictional_universe
'6': film
'7': food
'8': geography
'9': government
'10': law
'11': location
'12': military
'13': music
'14': opera
'15': organization
'16': people
'17': religion
'18': royalty
'19': soccer
'20': sports
'21': theater
'22': time
'23': travel
'24': tv
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-ORGANIZATION
'4': I-ORGANIZATION
'5': B-LOCATION
'6': I-LOCATION
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 177658278
num_examples: 532629
download_size: 204393976
dataset_size: 177658278
---
# Dataset Card for turkish_ner
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://arxiv.org/abs/1702.02363
- **Repository:** [Needs More Information]
- **Paper:** http://arxiv.org/abs/1702.02363
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** erayyildiz@ktu.edu.tr
### Dataset Summary
Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains.
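The `ner_tags` feature stores integer class ids in a BIO scheme (the tag names are listed in the dataset metadata above). A minimal decoding sketch, using a made-up example sentence:

```python
# BIO tag names, in the order declared in the dataset's `ner_tags` feature
NER_TAGS = ["O", "B-PERSON", "I-PERSON", "B-ORGANIZATION", "I-ORGANIZATION",
            "B-LOCATION", "I-LOCATION", "B-MISC", "I-MISC"]

# hypothetical example row (tokens and tag ids are made up for illustration)
example = {"tokens": ["Mustafa", "Kemal", "Ankara", "'ya", "gitti"],
           "ner_tags": [1, 2, 5, 0, 0]}

decoded = [NER_TAGS[i] for i in example["ner_tags"]]
print(decoded)  # ['B-PERSON', 'I-PERSON', 'B-LOCATION', 'O', 'O']
```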
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Turkish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
There's only the training set.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
H. Bahadir Sahin, Caglar Tirkaz, Eray Yildiz, Mustafa Tolga Eren and Omer Ozan Sonmez
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
@article{DBLP:journals/corr/SahinTYES17,
author = {H. Bahadir Sahin and
Caglar Tirkaz and
Eray Yildiz and
Mustafa Tolga Eren and
Omer Ozan Sonmez},
title = {Automatically Annotated Turkish Corpus for Named Entity Recognition
and Text Categorization using Large-Scale Gazetteers},
journal = {CoRR},
volume = {abs/1702.02363},
year = {2017},
url = {http://arxiv.org/abs/1702.02363},
archivePrefix = {arXiv},
eprint = {1702.02363},
timestamp = {Mon, 13 Aug 2018 16:46:36 +0200},
biburl = {https://dblp.org/rec/journals/corr/SahinTYES17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
### Contributions
Thanks to [@merveenoyan](https://github.com/merveenoyan) for adding this dataset. |
carolina-c4ai/corpus-carolina | 2023-03-23T19:46:16.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:pt",
"license:cc-by-nc-sa-4.0",
"region:us"
] | carolina-c4ai | Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). | null | null | 12 | 113 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- pt
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
pretty_name: Carolina
language_bcp47:
- pt-BR
---
# Dataset Card for Corpus Carolina
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [sites.usp.br/corpuscarolina](https://sites.usp.br/corpuscarolina/)
- **Current Version:** 1.2 (Ada)
- **Point of Contact:** [LaViHD](mailto:lavihd@usp.br)
### Dataset Summary
Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). This corpus contains documents and texts extracted from the web
and includes information (metadata) about its provenance and typology.
The documents are clustered into taxonomies, and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example below). Codes are 3-letter strings and possible values are:
- `dat` : datasets and other corpora;
- `jud` : judicial branch;
- `leg` : legislative branch;
- `pub` : public domain works;
- `soc` : social media;
- `uni` : university domains;
- `wik` : wikis.
Dataset Versioning:
The Carolina Corpus is under continuous development, resulting in multiple versions. The current version is v1.2, but v1.1 is also available. You can access different versions of the corpus using the `revision` parameter on `load_dataset`.
Usage Example:
```python
from datasets import load_dataset
# to load all taxonomies
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")
# to load social media documents
social_media = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc")
# to load previous version
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina", revision="v1.1")
```
### Supported Tasks
The Carolina corpus was compiled for academic purposes,
namely linguistic and computational analysis.
### Languages
Contemporary Brazilian Portuguese (1970-2021).
## Dataset Structure
Files are stored inside the `corpus` folder with a subfolder
for each taxonomy. Every file follows an XML structure
(TEI P5) and contains multiple extracted documents. For
each document, the text and metadata are exposed as
`text` and `meta` features, respectively.
### Data Instances
Every instance have the following structure.
```
{
"meta": datasets.Value("string"),
"text": datasets.Value("string")
}
```
| Code | Taxonomy | Instances | Size |
|:----:|:---------------------------|----------:|-------:|
| | **Total** | 2107045 | 11 GB |
| dat | Datasets and other Corpora | 1102049 | 4.4 GB |
| wik | Wikis | 960139 | 5.2 GB |
| jud | Judicial Branch | 40464 | 1.5 GB |
| leg | Legislative Branch | 13 | 25 MB |
| soc | Social Media | 3413 | 17 MB |
| uni | University Domains | 941 | 10 MB |
| pub | Public Domain Works | 26 | 4.5 MB |
### Data Fields
- `meta`: an XML string with a TEI-conformant `teiHeader` tag. It is exposed as text and needs to be parsed in order to access the actual metadata;
- `text`: a string containing the extracted document.
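Since `meta` is exposed as raw text, it has to be parsed before the header fields can be used. A minimal sketch with the Python standard library, using a made-up (much smaller than real) TEI header:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified TEI header; real Carolina headers are larger,
# but they use the same TEI P5 namespace and nesting.
meta = """<teiHeader xmlns="http://www.tei-c.org/ns/1.0">
  <fileDesc>
    <titleStmt><title>Example document</title></titleStmt>
  </fileDesc>
</teiHeader>"""

root = ET.fromstring(meta)
ns = {"tei": "http://www.tei-c.org/ns/1.0"}
title = root.find(".//tei:title", ns).text
print(title)  # Example document
```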
### Data Splits
As a general corpus, Carolina does not have predefined splits. In order to load the dataset, use `corpus` as its single split.
## Additional Information
### Dataset Curators
The Corpus Carolina is developed by a multidisciplinary
team of linguists and computer scientists, members of the
Virtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.
### Licensing Information
The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was
compiled for academic purposes, namely linguistic and computational analysis.
It is composed of texts assembled in various digital repositories, whose
licenses are multiple and therefore should be observed when making use of the
corpus. The Carolina headers are licensed under Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International.
### Citation Information
```
@misc{corpusCarolinaV1.1,
title={
Carolina:
The Open Corpus for Linguistics and Artificial Intelligence
},
author={
Finger, Marcelo and
Paixão de Sousa, Maria Clara and
Namiuti, Cristiane and
Martins do Monte, Vanessa and
Costa, Aline Silva and
Serras, Felipe Ribas and
Sturzeneker, Mariana Lourenço and
Guets, Raquel de Paula and
Mesquita, Renata Morais and
Mello, Guilherme Lamartine de and
Crespo, Maria Clara Ramos Morales and
Rocha, Maria Lina de Souza Jeannine and
Brasil, Patrícia and
Silva, Mariana Marques da and
Palma, Mayara Feliciano
},
howpublished={\url{
https://sites.usp.br/corpuscarolina/corpus}},
year={2022},
note={Version 1.1 (Ada)},
}
```
|
Jzuluaga/atcosim_corpus | 2022-12-05T11:14:57.000Z | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2203.16822",
"region:us"
] | Jzuluaga | null | null | null | 0 | 113 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 471628915.76
num_examples: 1901
- name: train
num_bytes: 1934757106.88
num_examples: 7638
download_size: 0
dataset_size: 2406386022.6400003
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
task_categories:
- automatic-speech-recognition
language:
- en
multilinguality:
- monolingual
---
# Dataset Card for ATCOSIM corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCOSIM homepage](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech](https://aclanthology.org/L08-1507/)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer ([description here](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)).
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German, or Swiss French as their native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- `id (string)`: a string identifier for each example's recording.
- `audio (audio)`: audio data for the given ID
- `text (string)`: the transcript of the recording, already normalized. See these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time`
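A small sketch of the relationship between these fields, using a made-up example row (the id and times below are illustrative, not taken from the corpus):

```python
# Hypothetical example row mirroring the fields described above
example = {
    "id": "sample_0001",          # made-up identifier
    "segment_start_time": 0.0,
    "segment_end_time": 3.52,
}

# duration is defined as segment_end_time - segment_start_time
example["duration"] = example["segment_end_time"] - example["segment_start_time"]
print(example["duration"])  # 3.52
```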
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [ATCOSIM corpus](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html) creators.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@inproceedings{hofbauer-etal-2008-atcosim,
title = "The {ATCOSIM} Corpus of Non-Prompted Clean Air Traffic Control Speech",
author = "Hofbauer, Konrad and
Petrik, Stefan and
Hering, Horst",
booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
month = may,
year = "2008",
address = "Marrakech, Morocco",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2008/pdf/545_paper.pdf",
}
```
|
cooleel/xfund_de | 2022-12-02T03:12:40.000Z | [
"license:mit",
"region:us"
] | cooleel | https://github.com/doc-analysis/XFUND | @inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
} | null | 0 | 113 | ---
license: mit
---
The XFUND dataset with annotations at the word level.
For more detail on the original XFUND dataset,
see [the official repository](https://github.com/doc-analysis/XFUND).
#### Citation Information
``` latex
@inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
```
|
sanchit-gandhi/concatenated_librispeech | 2023-01-26T11:45:39.000Z | [
"region:us"
] | sanchit-gandhi | null | null | null | 0 | 113 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 707889.0
num_examples: 1
download_size: 0
dataset_size: 707889.0
---
# Dataset Card for "concatenated_librispeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lucadiliello/newsqa | 2023-06-06T08:36:25.000Z | [
"region:us"
] | lucadiliello | null | null | null | 2 | 113 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: key
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
splits:
- name: train
num_bytes: 234711053
num_examples: 74160
- name: validation
num_bytes: 13234782
num_examples: 4212
download_size: 31328809
dataset_size: 247945835
---
# Dataset Card for "newsqa"
Split taken from the MRQA 2019 Shared Task, formatted and filtered for Question Answering. For the original dataset, have a look [here](https://huggingface.co/datasets/mrqa). |
theodor1289/imagenet-1k_tiny | 2023-03-23T08:14:11.000Z | [
"region:us"
] | theodor1289 | null | null | null | 1 | 113 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': tench, Tinca tinca
'1': goldfish, Carassius auratus
'2': great white shark, white shark, man-eater, man-eating shark, Carcharodon
carcharias
'3': tiger shark, Galeocerdo cuvieri
'4': hammerhead, hammerhead shark
'5': electric ray, crampfish, numbfish, torpedo
'6': stingray
'7': cock
'8': hen
'9': ostrich, Struthio camelus
'10': brambling, Fringilla montifringilla
'11': goldfinch, Carduelis carduelis
'12': house finch, linnet, Carpodacus mexicanus
'13': junco, snowbird
'14': indigo bunting, indigo finch, indigo bird, Passerina cyanea
'15': robin, American robin, Turdus migratorius
'16': bulbul
'17': jay
'18': magpie
'19': chickadee
'20': water ouzel, dipper
'21': kite
'22': bald eagle, American eagle, Haliaeetus leucocephalus
'23': vulture
'24': great grey owl, great gray owl, Strix nebulosa
'25': European fire salamander, Salamandra salamandra
'26': common newt, Triturus vulgaris
'27': eft
'28': spotted salamander, Ambystoma maculatum
'29': axolotl, mud puppy, Ambystoma mexicanum
'30': bullfrog, Rana catesbeiana
'31': tree frog, tree-frog
'32': tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
'33': loggerhead, loggerhead turtle, Caretta caretta
'34': leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
'35': mud turtle
'36': terrapin
'37': box turtle, box tortoise
'38': banded gecko
'39': common iguana, iguana, Iguana iguana
'40': American chameleon, anole, Anolis carolinensis
'41': whiptail, whiptail lizard
'42': agama
'43': frilled lizard, Chlamydosaurus kingi
'44': alligator lizard
'45': Gila monster, Heloderma suspectum
'46': green lizard, Lacerta viridis
'47': African chameleon, Chamaeleo chamaeleon
'48': Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus
komodoensis
'49': African crocodile, Nile crocodile, Crocodylus niloticus
'50': American alligator, Alligator mississipiensis
'51': triceratops
'52': thunder snake, worm snake, Carphophis amoenus
'53': ringneck snake, ring-necked snake, ring snake
'54': hognose snake, puff adder, sand viper
'55': green snake, grass snake
'56': king snake, kingsnake
'57': garter snake, grass snake
'58': water snake
'59': vine snake
'60': night snake, Hypsiglena torquata
'61': boa constrictor, Constrictor constrictor
'62': rock python, rock snake, Python sebae
'63': Indian cobra, Naja naja
'64': green mamba
'65': sea snake
'66': horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
'67': diamondback, diamondback rattlesnake, Crotalus adamanteus
'68': sidewinder, horned rattlesnake, Crotalus cerastes
'69': trilobite
'70': harvestman, daddy longlegs, Phalangium opilio
'71': scorpion
'72': black and gold garden spider, Argiope aurantia
'73': barn spider, Araneus cavaticus
'74': garden spider, Aranea diademata
'75': black widow, Latrodectus mactans
'76': tarantula
'77': wolf spider, hunting spider
'78': tick
'79': centipede
'80': black grouse
'81': ptarmigan
'82': ruffed grouse, partridge, Bonasa umbellus
'83': prairie chicken, prairie grouse, prairie fowl
'84': peacock
'85': quail
'86': partridge
'87': African grey, African gray, Psittacus erithacus
'88': macaw
'89': sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
'90': lorikeet
'91': coucal
'92': bee eater
'93': hornbill
'94': hummingbird
'95': jacamar
'96': toucan
'97': drake
'98': red-breasted merganser, Mergus serrator
'99': goose
'100': black swan, Cygnus atratus
'101': tusker
'102': echidna, spiny anteater, anteater
'103': platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus
anatinus
'104': wallaby, brush kangaroo
'105': koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
'106': wombat
'107': jellyfish
'108': sea anemone, anemone
'109': brain coral
'110': flatworm, platyhelminth
'111': nematode, nematode worm, roundworm
'112': conch
'113': snail
'114': slug
'115': sea slug, nudibranch
'116': chiton, coat-of-mail shell, sea cradle, polyplacophore
'117': chambered nautilus, pearly nautilus, nautilus
'118': Dungeness crab, Cancer magister
'119': rock crab, Cancer irroratus
'120': fiddler crab
'121': king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes
camtschatica
'122': American lobster, Northern lobster, Maine lobster, Homarus americanus
'123': spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
'124': crayfish, crawfish, crawdad, crawdaddy
'125': hermit crab
'126': isopod
'127': white stork, Ciconia ciconia
'128': black stork, Ciconia nigra
'129': spoonbill
'130': flamingo
'131': little blue heron, Egretta caerulea
'132': American egret, great white heron, Egretta albus
'133': bittern
'134': crane
'135': limpkin, Aramus pictus
'136': European gallinule, Porphyrio porphyrio
'137': American coot, marsh hen, mud hen, water hen, Fulica americana
'138': bustard
'139': ruddy turnstone, Arenaria interpres
'140': red-backed sandpiper, dunlin, Erolia alpina
'141': redshank, Tringa totanus
'142': dowitcher
'143': oystercatcher, oyster catcher
'144': pelican
'145': king penguin, Aptenodytes patagonica
'146': albatross, mollymawk
'147': grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius
robustus
'148': killer whale, killer, orca, grampus, sea wolf, Orcinus orca
'149': dugong, Dugong dugon
'150': sea lion
'151': Chihuahua
'152': Japanese spaniel
'153': Maltese dog, Maltese terrier, Maltese
'154': Pekinese, Pekingese, Peke
'155': Shih-Tzu
'156': Blenheim spaniel
'157': papillon
'158': toy terrier
'159': Rhodesian ridgeback
'160': Afghan hound, Afghan
'161': basset, basset hound
'162': beagle
'163': bloodhound, sleuthhound
'164': bluetick
'165': black-and-tan coonhound
'166': Walker hound, Walker foxhound
'167': English foxhound
'168': redbone
'169': borzoi, Russian wolfhound
'170': Irish wolfhound
'171': Italian greyhound
'172': whippet
'173': Ibizan hound, Ibizan Podenco
'174': Norwegian elkhound, elkhound
'175': otterhound, otter hound
'176': Saluki, gazelle hound
'177': Scottish deerhound, deerhound
'178': Weimaraner
'179': Staffordshire bullterrier, Staffordshire bull terrier
'180': American Staffordshire terrier, Staffordshire terrier, American pit
bull terrier, pit bull terrier
'181': Bedlington terrier
'182': Border terrier
'183': Kerry blue terrier
'184': Irish terrier
'185': Norfolk terrier
'186': Norwich terrier
'187': Yorkshire terrier
'188': wire-haired fox terrier
'189': Lakeland terrier
'190': Sealyham terrier, Sealyham
'191': Airedale, Airedale terrier
'192': cairn, cairn terrier
'193': Australian terrier
'194': Dandie Dinmont, Dandie Dinmont terrier
'195': Boston bull, Boston terrier
'196': miniature schnauzer
'197': giant schnauzer
'198': standard schnauzer
'199': Scotch terrier, Scottish terrier, Scottie
'200': Tibetan terrier, chrysanthemum dog
'201': silky terrier, Sydney silky
'202': soft-coated wheaten terrier
'203': West Highland white terrier
'204': Lhasa, Lhasa apso
'205': flat-coated retriever
'206': curly-coated retriever
'207': golden retriever
'208': Labrador retriever
'209': Chesapeake Bay retriever
'210': German short-haired pointer
'211': vizsla, Hungarian pointer
'212': English setter
'213': Irish setter, red setter
'214': Gordon setter
'215': Brittany spaniel
'216': clumber, clumber spaniel
'217': English springer, English springer spaniel
'218': Welsh springer spaniel
'219': cocker spaniel, English cocker spaniel, cocker
'220': Sussex spaniel
'221': Irish water spaniel
'222': kuvasz
'223': schipperke
'224': groenendael
'225': malinois
'226': briard
'227': kelpie
'228': komondor
'229': Old English sheepdog, bobtail
'230': Shetland sheepdog, Shetland sheep dog, Shetland
'231': collie
'232': Border collie
'233': Bouvier des Flandres, Bouviers des Flandres
'234': Rottweiler
'235': German shepherd, German shepherd dog, German police dog, alsatian
'236': Doberman, Doberman pinscher
'237': miniature pinscher
'238': Greater Swiss Mountain dog
'239': Bernese mountain dog
'240': Appenzeller
'241': EntleBucher
'242': boxer
'243': bull mastiff
'244': Tibetan mastiff
'245': French bulldog
'246': Great Dane
'247': Saint Bernard, St Bernard
'248': Eskimo dog, husky
'249': malamute, malemute, Alaskan malamute
'250': Siberian husky
'251': dalmatian, coach dog, carriage dog
'252': affenpinscher, monkey pinscher, monkey dog
'253': basenji
'254': pug, pug-dog
'255': Leonberg
'256': Newfoundland, Newfoundland dog
'257': Great Pyrenees
'258': Samoyed, Samoyede
'259': Pomeranian
'260': chow, chow chow
'261': keeshond
'262': Brabancon griffon
'263': Pembroke, Pembroke Welsh corgi
'264': Cardigan, Cardigan Welsh corgi
'265': toy poodle
'266': miniature poodle
'267': standard poodle
'268': Mexican hairless
'269': timber wolf, grey wolf, gray wolf, Canis lupus
'270': white wolf, Arctic wolf, Canis lupus tundrarum
'271': red wolf, maned wolf, Canis rufus, Canis niger
'272': coyote, prairie wolf, brush wolf, Canis latrans
'273': dingo, warrigal, warragal, Canis dingo
'274': dhole, Cuon alpinus
'275': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
'276': hyena, hyaena
'277': red fox, Vulpes vulpes
'278': kit fox, Vulpes macrotis
'279': Arctic fox, white fox, Alopex lagopus
'280': grey fox, gray fox, Urocyon cinereoargenteus
'281': tabby, tabby cat
'282': tiger cat
'283': Persian cat
'284': Siamese cat, Siamese
'285': Egyptian cat
'286': cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
'287': lynx, catamount
'288': leopard, Panthera pardus
'289': snow leopard, ounce, Panthera uncia
'290': jaguar, panther, Panthera onca, Felis onca
'291': lion, king of beasts, Panthera leo
'292': tiger, Panthera tigris
'293': cheetah, chetah, Acinonyx jubatus
'294': brown bear, bruin, Ursus arctos
'295': American black bear, black bear, Ursus americanus, Euarctos americanus
'296': ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
'297': sloth bear, Melursus ursinus, Ursus ursinus
'298': mongoose
'299': meerkat, mierkat
'300': tiger beetle
'301': ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
'302': ground beetle, carabid beetle
'303': long-horned beetle, longicorn, longicorn beetle
'304': leaf beetle, chrysomelid
'305': dung beetle
'306': rhinoceros beetle
'307': weevil
'308': fly
'309': bee
'310': ant, emmet, pismire
'311': grasshopper, hopper
'312': cricket
'313': walking stick, walkingstick, stick insect
'314': cockroach, roach
'315': mantis, mantid
'316': cicada, cicala
'317': leafhopper
'318': lacewing, lacewing fly
'319': dragonfly, darning needle, devil's darning needle, sewing needle,
snake feeder, snake doctor, mosquito hawk, skeeter hawk
'320': damselfly
'321': admiral
'322': ringlet, ringlet butterfly
'323': monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
'324': cabbage butterfly
'325': sulphur butterfly, sulfur butterfly
'326': lycaenid, lycaenid butterfly
'327': starfish, sea star
'328': sea urchin
'329': sea cucumber, holothurian
'330': wood rabbit, cottontail, cottontail rabbit
'331': hare
'332': Angora, Angora rabbit
'333': hamster
'334': porcupine, hedgehog
'335': fox squirrel, eastern fox squirrel, Sciurus niger
'336': marmot
'337': beaver
'338': guinea pig, Cavia cobaya
'339': sorrel
'340': zebra
'341': hog, pig, grunter, squealer, Sus scrofa
'342': wild boar, boar, Sus scrofa
'343': warthog
'344': hippopotamus, hippo, river horse, Hippopotamus amphibius
'345': ox
'346': water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
'347': bison
'348': ram, tup
'349': bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain
sheep, Ovis canadensis
'350': ibex, Capra ibex
'351': hartebeest
'352': impala, Aepyceros melampus
'353': gazelle
'354': Arabian camel, dromedary, Camelus dromedarius
'355': llama
'356': weasel
'357': mink
'358': polecat, fitch, foulmart, foumart, Mustela putorius
'359': black-footed ferret, ferret, Mustela nigripes
'360': otter
'361': skunk, polecat, wood pussy
'362': badger
'363': armadillo
'364': three-toed sloth, ai, Bradypus tridactylus
'365': orangutan, orang, orangutang, Pongo pygmaeus
'366': gorilla, Gorilla gorilla
'367': chimpanzee, chimp, Pan troglodytes
'368': gibbon, Hylobates lar
'369': siamang, Hylobates syndactylus, Symphalangus syndactylus
'370': guenon, guenon monkey
'371': patas, hussar monkey, Erythrocebus patas
'372': baboon
'373': macaque
'374': langur
'375': colobus, colobus monkey
'376': proboscis monkey, Nasalis larvatus
'377': marmoset
'378': capuchin, ringtail, Cebus capucinus
'379': howler monkey, howler
'380': titi, titi monkey
'381': spider monkey, Ateles geoffroyi
'382': squirrel monkey, Saimiri sciureus
'383': Madagascar cat, ring-tailed lemur, Lemur catta
'384': indri, indris, Indri indri, Indri brevicaudatus
'385': Indian elephant, Elephas maximus
'386': African elephant, Loxodonta africana
'387': lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
'388': giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
'389': barracouta, snoek
'390': eel
'391': coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus
kisutch
'392': rock beauty, Holocanthus tricolor
'393': anemone fish
'394': sturgeon
'395': gar, garfish, garpike, billfish, Lepisosteus osseus
'396': lionfish
'397': puffer, pufferfish, blowfish, globefish
'398': abacus
'399': abaya
'400': academic gown, academic robe, judge's robe
'401': accordion, piano accordion, squeeze box
'402': acoustic guitar
'403': aircraft carrier, carrier, flattop, attack aircraft carrier
'404': airliner
'405': airship, dirigible
'406': altar
'407': ambulance
'408': amphibian, amphibious vehicle
'409': analog clock
'410': apiary, bee house
'411': apron
'412': ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin,
dustbin, trash barrel, trash bin
'413': assault rifle, assault gun
'414': backpack, back pack, knapsack, packsack, rucksack, haversack
'415': bakery, bakeshop, bakehouse
'416': balance beam, beam
'417': balloon
'418': ballpoint, ballpoint pen, ballpen, Biro
'419': Band Aid
'420': banjo
'421': bannister, banister, balustrade, balusters, handrail
'422': barbell
'423': barber chair
'424': barbershop
'425': barn
'426': barometer
'427': barrel, cask
'428': barrow, garden cart, lawn cart, wheelbarrow
'429': baseball
'430': basketball
'431': bassinet
'432': bassoon
'433': bathing cap, swimming cap
'434': bath towel
'435': bathtub, bathing tub, bath, tub
'436': beach wagon, station wagon, wagon, estate car, beach waggon, station
waggon, waggon
'437': beacon, lighthouse, beacon light, pharos
'438': beaker
'439': bearskin, busby, shako
'440': beer bottle
'441': beer glass
'442': bell cote, bell cot
'443': bib
'444': bicycle-built-for-two, tandem bicycle, tandem
'445': bikini, two-piece
'446': binder, ring-binder
'447': binoculars, field glasses, opera glasses
'448': birdhouse
'449': boathouse
'450': bobsled, bobsleigh, bob
'451': bolo tie, bolo, bola tie, bola
'452': bonnet, poke bonnet
'453': bookcase
'454': bookshop, bookstore, bookstall
'455': bottlecap
'456': bow
'457': bow tie, bow-tie, bowtie
'458': brass, memorial tablet, plaque
'459': brassiere, bra, bandeau
'460': breakwater, groin, groyne, mole, bulwark, seawall, jetty
'461': breastplate, aegis, egis
'462': broom
'463': bucket, pail
'464': buckle
'465': bulletproof vest
'466': bullet train, bullet
'467': butcher shop, meat market
'468': cab, hack, taxi, taxicab
'469': caldron, cauldron
'470': candle, taper, wax light
'471': cannon
'472': canoe
'473': can opener, tin opener
'474': cardigan
'475': car mirror
'476': carousel, carrousel, merry-go-round, roundabout, whirligig
'477': carpenter's kit, tool kit
'478': carton
'479': car wheel
'480': cash machine, cash dispenser, automated teller machine, automatic
teller machine, automated teller, automatic teller, ATM
'481': cassette
'482': cassette player
'483': castle
'484': catamaran
'485': CD player
'486': cello, violoncello
'487': cellular telephone, cellular phone, cellphone, cell, mobile phone
'488': chain
'489': chainlink fence
'490': chain mail, ring mail, mail, chain armor, chain armour, ring armor,
ring armour
'491': chain saw, chainsaw
'492': chest
'493': chiffonier, commode
'494': chime, bell, gong
'495': china cabinet, china closet
'496': Christmas stocking
'497': church, church building
'498': cinema, movie theater, movie theatre, movie house, picture palace
'499': cleaver, meat cleaver, chopper
'500': cliff dwelling
'501': cloak
'502': clog, geta, patten, sabot
'503': cocktail shaker
'504': coffee mug
'505': coffeepot
'506': coil, spiral, volute, whorl, helix
'507': combination lock
'508': computer keyboard, keypad
'509': confectionery, confectionary, candy store
'510': container ship, containership, container vessel
'511': convertible
'512': corkscrew, bottle screw
'513': cornet, horn, trumpet, trump
'514': cowboy boot
'515': cowboy hat, ten-gallon hat
'516': cradle
'517': crane2
'518': crash helmet
'519': crate
'520': crib, cot
'521': Crock Pot
'522': croquet ball
'523': crutch
'524': cuirass
'525': dam, dike, dyke
'526': desk
'527': desktop computer
'528': dial telephone, dial phone
'529': diaper, nappy, napkin
'530': digital clock
'531': digital watch
'532': dining table, board
'533': dishrag, dishcloth
'534': dishwasher, dish washer, dishwashing machine
'535': disk brake, disc brake
'536': dock, dockage, docking facility
'537': dogsled, dog sled, dog sleigh
'538': dome
'539': doormat, welcome mat
'540': drilling platform, offshore rig
'541': drum, membranophone, tympan
'542': drumstick
'543': dumbbell
'544': Dutch oven
'545': electric fan, blower
'546': electric guitar
'547': electric locomotive
'548': entertainment center
'549': envelope
'550': espresso maker
'551': face powder
'552': feather boa, boa
'553': file, file cabinet, filing cabinet
'554': fireboat
'555': fire engine, fire truck
'556': fire screen, fireguard
'557': flagpole, flagstaff
'558': flute, transverse flute
'559': folding chair
'560': football helmet
'561': forklift
'562': fountain
'563': fountain pen
'564': four-poster
'565': freight car
'566': French horn, horn
'567': frying pan, frypan, skillet
'568': fur coat
'569': garbage truck, dustcart
'570': gasmask, respirator, gas helmet
'571': gas pump, gasoline pump, petrol pump, island dispenser
'572': goblet
'573': go-kart
'574': golf ball
'575': golfcart, golf cart
'576': gondola
'577': gong, tam-tam
'578': gown
'579': grand piano, grand
'580': greenhouse, nursery, glasshouse
'581': grille, radiator grille
'582': grocery store, grocery, food market, market
'583': guillotine
'584': hair slide
'585': hair spray
'586': half track
'587': hammer
'588': hamper
'589': hand blower, blow dryer, blow drier, hair dryer, hair drier
'590': hand-held computer, hand-held microcomputer
'591': handkerchief, hankie, hanky, hankey
'592': hard disc, hard disk, fixed disk
'593': harmonica, mouth organ, harp, mouth harp
'594': harp
'595': harvester, reaper
'596': hatchet
'597': holster
'598': home theater, home theatre
'599': honeycomb
'600': hook, claw
'601': hoopskirt, crinoline
'602': horizontal bar, high bar
'603': horse cart, horse-cart
'604': hourglass
'605': iPod
'606': iron, smoothing iron
'607': jack-o'-lantern
'608': jean, blue jean, denim
'609': jeep, landrover
'610': jersey, T-shirt, tee shirt
'611': jigsaw puzzle
'612': jinrikisha, ricksha, rickshaw
'613': joystick
'614': kimono
'615': knee pad
'616': knot
'617': lab coat, laboratory coat
'618': ladle
'619': lampshade, lamp shade
'620': laptop, laptop computer
'621': lawn mower, mower
'622': lens cap, lens cover
'623': letter opener, paper knife, paperknife
'624': library
'625': lifeboat
'626': lighter, light, igniter, ignitor
'627': limousine, limo
'628': liner, ocean liner
'629': lipstick, lip rouge
'630': Loafer
'631': lotion
'632': loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
'633': loupe, jeweler's loupe
'634': lumbermill, sawmill
'635': magnetic compass
'636': mailbag, postbag
'637': mailbox, letter box
'638': maillot
'639': maillot, tank suit
'640': manhole cover
'641': maraca
'642': marimba, xylophone
'643': mask
'644': matchstick
'645': maypole
'646': maze, labyrinth
'647': measuring cup
'648': medicine chest, medicine cabinet
'649': megalith, megalithic structure
'650': microphone, mike
'651': microwave, microwave oven
'652': military uniform
'653': milk can
'654': minibus
'655': miniskirt, mini
'656': minivan
'657': missile
'658': mitten
'659': mixing bowl
'660': mobile home, manufactured home
'661': Model T
'662': modem
'663': monastery
'664': monitor
'665': moped
'666': mortar
'667': mortarboard
'668': mosque
'669': mosquito net
'670': motor scooter, scooter
'671': mountain bike, all-terrain bike, off-roader
'672': mountain tent
'673': mouse, computer mouse
'674': mousetrap
'675': moving van
'676': muzzle
'677': nail
'678': neck brace
'679': necklace
'680': nipple
'681': notebook, notebook computer
'682': obelisk
'683': oboe, hautboy, hautbois
'684': ocarina, sweet potato
'685': odometer, hodometer, mileometer, milometer
'686': oil filter
'687': organ, pipe organ
'688': oscilloscope, scope, cathode-ray oscilloscope, CRO
'689': overskirt
'690': oxcart
'691': oxygen mask
'692': packet
'693': paddle, boat paddle
'694': paddlewheel, paddle wheel
'695': padlock
'696': paintbrush
'697': pajama, pyjama, pj's, jammies
'698': palace
'699': panpipe, pandean pipe, syrinx
'700': paper towel
'701': parachute, chute
'702': parallel bars, bars
'703': park bench
'704': parking meter
'705': passenger car, coach, carriage
'706': patio, terrace
'707': pay-phone, pay-station
'708': pedestal, plinth, footstall
'709': pencil box, pencil case
'710': pencil sharpener
'711': perfume, essence
'712': Petri dish
'713': photocopier
'714': pick, plectrum, plectron
'715': pickelhaube
'716': picket fence, paling
'717': pickup, pickup truck
'718': pier
'719': piggy bank, penny bank
'720': pill bottle
'721': pillow
'722': ping-pong ball
'723': pinwheel
'724': pirate, pirate ship
'725': pitcher, ewer
'726': plane, carpenter's plane, woodworking plane
'727': planetarium
'728': plastic bag
'729': plate rack
'730': plow, plough
'731': plunger, plumber's helper
'732': Polaroid camera, Polaroid Land camera
'733': pole
'734': police van, police wagon, paddy wagon, patrol wagon, wagon, black
Maria
'735': poncho
'736': pool table, billiard table, snooker table
'737': pop bottle, soda bottle
'738': pot, flowerpot
'739': potter's wheel
'740': power drill
'741': prayer rug, prayer mat
'742': printer
'743': prison, prison house
'744': projectile, missile
'745': projector
'746': puck, hockey puck
'747': punching bag, punch bag, punching ball, punchball
'748': purse
'749': quill, quill pen
'750': quilt, comforter, comfort, puff
'751': racer, race car, racing car
'752': racket, racquet
'753': radiator
'754': radio, wireless
'755': radio telescope, radio reflector
'756': rain barrel
'757': recreational vehicle, RV, R.V.
'758': reel
'759': reflex camera
'760': refrigerator, icebox
'761': remote control, remote
'762': restaurant, eating house, eating place, eatery
'763': revolver, six-gun, six-shooter
'764': rifle
'765': rocking chair, rocker
'766': rotisserie
'767': rubber eraser, rubber, pencil eraser
'768': rugby ball
'769': rule, ruler
'770': running shoe
'771': safe
'772': safety pin
'773': saltshaker, salt shaker
'774': sandal
'775': sarong
'776': sax, saxophone
'777': scabbard
'778': scale, weighing machine
'779': school bus
'780': schooner
'781': scoreboard
'782': screen, CRT screen
'783': screw
'784': screwdriver
'785': seat belt, seatbelt
'786': sewing machine
'787': shield, buckler
'788': shoe shop, shoe-shop, shoe store
'789': shoji
'790': shopping basket
'791': shopping cart
'792': shovel
'793': shower cap
'794': shower curtain
'795': ski
'796': ski mask
'797': sleeping bag
'798': slide rule, slipstick
'799': sliding door
'800': slot, one-armed bandit
'801': snorkel
'802': snowmobile
'803': snowplow, snowplough
'804': soap dispenser
'805': soccer ball
'806': sock
'807': solar dish, solar collector, solar furnace
'808': sombrero
'809': soup bowl
'810': space bar
'811': space heater
'812': space shuttle
'813': spatula
'814': speedboat
'815': spider web, spider's web
'816': spindle
'817': sports car, sport car
'818': spotlight, spot
'819': stage
'820': steam locomotive
'821': steel arch bridge
'822': steel drum
'823': stethoscope
'824': stole
'825': stone wall
'826': stopwatch, stop watch
'827': stove
'828': strainer
'829': streetcar, tram, tramcar, trolley, trolley car
'830': stretcher
'831': studio couch, day bed
'832': stupa, tope
'833': submarine, pigboat, sub, U-boat
'834': suit, suit of clothes
'835': sundial
'836': sunglass
'837': sunglasses, dark glasses, shades
'838': sunscreen, sunblock, sun blocker
'839': suspension bridge
'840': swab, swob, mop
'841': sweatshirt
'842': swimming trunks, bathing trunks
'843': swing
'844': switch, electric switch, electrical switch
'845': syringe
'846': table lamp
'847': tank, army tank, armored combat vehicle, armoured combat vehicle
'848': tape player
'849': teapot
'850': teddy, teddy bear
'851': television, television system
'852': tennis ball
'853': thatch, thatched roof
'854': theater curtain, theatre curtain
'855': thimble
'856': thresher, thrasher, threshing machine
'857': throne
'858': tile roof
'859': toaster
'860': tobacco shop, tobacconist shop, tobacconist
'861': toilet seat
'862': torch
'863': totem pole
'864': tow truck, tow car, wrecker
'865': toyshop
'866': tractor
'867': trailer truck, tractor trailer, trucking rig, rig, articulated lorry,
semi
'868': tray
'869': trench coat
'870': tricycle, trike, velocipede
'871': trimaran
'872': tripod
'873': triumphal arch
'874': trolleybus, trolley coach, trackless trolley
'875': trombone
'876': tub, vat
'877': turnstile
'878': typewriter keyboard
'879': umbrella
'880': unicycle, monocycle
'881': upright, upright piano
'882': vacuum, vacuum cleaner
'883': vase
'884': vault
'885': velvet
'886': vending machine
'887': vestment
'888': viaduct
'889': violin, fiddle
'890': volleyball
'891': waffle iron
'892': wall clock
'893': wallet, billfold, notecase, pocketbook
'894': wardrobe, closet, press
'895': warplane, military plane
'896': washbasin, handbasin, washbowl, lavabo, wash-hand basin
'897': washer, automatic washer, washing machine
'898': water bottle
'899': water jug
'900': water tower
'901': whiskey jug
'902': whistle
'903': wig
'904': window screen
'905': window shade
'906': Windsor tie
'907': wine bottle
'908': wing
'909': wok
'910': wooden spoon
'911': wool, woolen, woollen
'912': worm fence, snake fence, snake-rail fence, Virginia fence
'913': wreck
'914': yawl
'915': yurt
'916': web site, website, internet site, site
'917': comic book
'918': crossword puzzle, crossword
'919': street sign
'920': traffic light, traffic signal, stoplight
'921': book jacket, dust cover, dust jacket, dust wrapper
'922': menu
'923': plate
'924': guacamole
'925': consomme
'926': hot pot, hotpot
'927': trifle
'928': ice cream, icecream
'929': ice lolly, lolly, lollipop, popsicle
'930': French loaf
'931': bagel, beigel
'932': pretzel
'933': cheeseburger
'934': hotdog, hot dog, red hot
'935': mashed potato
'936': head cabbage
'937': broccoli
'938': cauliflower
'939': zucchini, courgette
'940': spaghetti squash
'941': acorn squash
'942': butternut squash
'943': cucumber, cuke
'944': artichoke, globe artichoke
'945': bell pepper
'946': cardoon
'947': mushroom
'948': Granny Smith
'949': strawberry
'950': orange
'951': lemon
'952': fig
'953': pineapple, ananas
'954': banana
'955': jackfruit, jak, jack
'956': custard apple
'957': pomegranate
'958': hay
'959': carbonara
'960': chocolate sauce, chocolate syrup
'961': dough
'962': meat loaf, meatloaf
'963': pizza, pizza pie
'964': potpie
'965': burrito
'966': red wine
'967': espresso
'968': cup
'969': eggnog
'970': alp
'971': bubble
'972': cliff, drop, drop-off
'973': coral reef
'974': geyser
'975': lakeside, lakeshore
'976': promontory, headland, head, foreland
'977': sandbar, sand bar
'978': seashore, coast, seacoast, sea-coast
'979': valley, vale
'980': volcano
'981': ballplayer, baseball player
'982': groom, bridegroom
'983': scuba diver
'984': rapeseed
'985': daisy
'986': yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus,
Cypripedium parviflorum
'987': corn
'988': acorn
'989': hip, rose hip, rosehip
'990': buckeye, horse chestnut, conker
'991': coral fungus
'992': agaric
'993': gyromitra
'994': stinkhorn, carrion fungus
'995': earthstar
'996': hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola
frondosa
'997': bolete
'998': ear, spike, capitulum
'999': toilet tissue, toilet paper, bathroom tissue
splits:
- name: train
num_bytes: 11957097.0
num_examples: 100
download_size: 11936960
dataset_size: 11957097.0
---
# Dataset Card for "imagenet-1k_mini_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Francesco/underwater-pipes-4ng4t | 2023-03-30T09:18:16.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 1 | 113 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': underwater-pipes
'1': pipe
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: underwater-pipes-4ng4t
tags:
- rf100
---
# Dataset Card for underwater-pipes-4ng4t
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/underwater-pipes-4ng4t
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
underwater-pipes-4ng4t
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
    'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
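Since `bbox` uses the COCO convention `[x_min, y_min, width, height]`, a small helper is often needed to convert boxes to corner format for plotting or IoU computation. A minimal sketch, using a box from the sample instance above:

```python
def coco_to_corners(bbox):
    """Convert a COCO-format [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner format."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]


# First box from the sample data instance above.
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```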
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/underwater-pipes-4ng4t
### Citation Information
```
@misc{ underwater-pipes-4ng4t,
title = { underwater pipes 4ng4t Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/underwater-pipes-4ng4t } },
url = { https://universe.roboflow.com/object-detection/underwater-pipes-4ng4t },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
akoksal/LongForm | 2023-09-19T20:20:39.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:summarization",
"task_categories:table-to-text",
"size_categories:10K<n<100K",
"language:en",
"instruction-tuning",
"arxiv:2304.08460",
"region:us"
] | akoksal | null | null | null | 30 | 113 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: subset
dtype: string
splits:
- name: train
num_bytes: 63759065
num_examples: 23652
- name: validation
num_bytes: 6190242
num_examples: 2042
- name: test
num_bytes: 6080212
num_examples: 2045
download_size: 45525146
dataset_size: 76029519
task_categories:
- text2text-generation
- text-generation
- question-answering
- conversational
- summarization
- table-to-text
language:
- en
tags:
- instruction-tuning
pretty_name: longform
size_categories:
- 10K<n<100K
---
# LongForm
The LongForm dataset is created by leveraging English corpus
examples with augmented instructions. We select a
diverse set of human-written
documents from existing corpora such as C4 and
Wikipedia and generate instructions for the given
documents via LLMs. Then, we extend these examples with structured corpora examples such as Stack Exchange and WikiHow and task examples such as question answering, email writing, grammar error correction, story/poem generation, and text summarization.
## Distribution
The distribution of the LongForm dataset in terms of the source of examples is below. It contains examples generated from raw text corpora via LLMs, structured corpus examples, as well as various NLP task examples such as email writing, grammar error correction, story/poem generation, and text summarization.
| **Type** | **Source** | **Number of Examples** |
|------------------------|----------------|------------------------|
| **Corpora** | C4 | 10,000 |
| | Wikipedia | 5,000 |
| **Structured Corpora** | Stack Exchange | 4,380 |
| | WikiHow | 2,500 |
| **Tasks** | NIv2 | 3,684 |
| | Big Bench | 600 |
| | BEA-GEC | 1,203 |
| | Enron | 372 |
| **Total** | | 27,739 |
| | | |
| **Train** | | 23,652 |
| **Validation** | | 2,042 |
| **Test** | | 2,045 |
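A minimal sketch of turning one LongForm example into an instruction-tuning prompt (the `input`/`output` field names follow the schema in the metadata above; the template itself is an assumption, not the one used in the paper):

```python
def format_example(example, eos="</s>"):
    # Hypothetical template: pair the instruction with its long-form target.
    return f"Instruction: {example['input']}\nOutput: {example['output']}{eos}"

ex = {
    "input": "Where is San Saba located in Rome, Italy? Respond in 2 sentences.",
    "output": "San Saba is an ancient basilica church in Rome, Italy. ...",
    "source": "Wikipedia",
    "subset": "wiki",  # hypothetical subset value
}
prompt = format_example(ex)
```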
## Models
| | **All** | **Recipe Generation** | **ELI5** | **Writing Prompts** |
|-----------------------|---------|-----------------------------------|----------|---------------------|
| **T0++** | 10.9 | 18.7 | 3.8 | 10.2 |
| **Tk-Instruct** | 6.3 | 12.9* | 3.6 | 2.4 |
| **Flan-T5** | 10.6 | 20.9* | 3.5 | 7.4 |
| **Alpaca-LLaMA-7B** | 14.6 | 19.5 | 12.5 | 11.8 |
| **OPT-30B** | 11.1 | 18.6 | 12.2 | 2.6 |
| **[LongForm-T5-XL](https://huggingface.co/akoksal/LongForm-T5-XL)** | 16.3 | 20.2 | 18.3 | 10.6 |
| **[LongForm-OPT-2.7B](https://huggingface.co/akoksal/LongForm-OPT-2.7B)** | 17.8 | 15.5 | 17.9 | **19.9** |
| **[LongForm-OPT-6.7B](https://huggingface.co/akoksal/LongForm-OPT-6.7B)** | 17.7 | 16.9 | 17.2 | 19.0 |
| [**LongForm-LLaMA-7B**](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff)‡ | **19.7** | **21.7** | **18.6** | 18.9 |
‡: Because of the restrictions on LLaMA models, we can only release the difference between LongForm-LLaMA-7B and the pretrained LLaMA-7B publicly.
## Examples
| **Source** | **Instruction** | **Output**|
|------------------------|----------------|------------------------|
| C4 | What is the likelihood that Donald Trump's HHS secretary nominee, Alex Azar, will push policies that benefit pharmaceutical companies' pocketbooks? | His record suggests he’s unlikely to push pricing policies that would hit pharmaceutical companies’ pocketbooks.<br />President Donald Trump tweeted Monday that his nominee for HHS secretary, Alex Azar will “be a star for … lower drug prices!” But the record of the former top executive for Eli Lilly, which tripled the price of a top-selling insulin drug while he led its U.S. operation, suggests a different story.<br />Lilly is one of three drug companies targeted by a class-action lawsuit that accuses the company[...]<br />“This nominee has impeccable big pharma credentials but non-existent credentials when it comes to concerns on drug prices,” said Peter Welch, co-chair of the House Democrats Drug Pricing Task Force.
|Wikipedia | "Where is San Saba located in Rome, Italy?" Respond in 2 sentences. |San Saba is an ancient basilica church in Rome, Italy. It lies on the so-called "Piccolo Aventino", which is an area close to the ancient Aurelian Walls next to the Aventine Hill and Caelian Hill.|
## Dataset Description
- **Repository:** https://github.com/akoksal/LongForm
- **Paper:** https://arxiv.org/abs/2304.08460
- **Version:** v1.0 - April 18, 2023
- **Contact:** [Abdullatif Köksal](https://twitter.com/akoksal_)
## License
The LongForm project is subject to a MIT License with custom limitations for restrictions imposed by OpenAI (for the instruction generation part), as well as the license of language models (OPT, LLaMA, and T5).
## Citation
```
@misc{koksal2023longform,
title={LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction},
author={Abdullatif Köksal and Timo Schick and Anna Korhonen and Hinrich Schütze},
year={2023},
eprint={2304.08460},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
argilla/dolly-curated-comparison-falcon-7b-instruct | 2023-07-13T11:28:57.000Z | [
"language:en",
"region:us"
] | argilla | null | null | null | 4 | 113 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response-1
dtype: string
- name: response-2
dtype: string
- name: category
dtype: string
- name: original_response
dtype: string
- name: external_id
dtype: int64
splits:
- name: train
num_bytes: 10328235
num_examples: 7401
download_size: 6598297
dataset_size: 10328235
---
# Dataset Card for "dolly-curated-comparison-falcon-7b-instruct"
This dataset contains two responses generated with the `falcon-7b-instruct` model alongside the original, curated prompt and response from the Dolly v2 curated dataset. For now, only 50% of the original dataset is available, but we plan to complete it.
This dataset can be used for training a reward model for RLHF using [Argilla Feedback](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/conceptual_guides.html)
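A hedged sketch of reshaping one row into a comparison record for reward-model training (field names from the schema above; the row content is invented, and the pairing scheme is just one reasonable choice):

```python
def to_comparison(row):
    """Pit the two generated responses against the human-curated original."""
    return {
        "prompt": row["prompt"],
        "candidates": [row["response-1"], row["response-2"]],
        "reference": row["original_response"],
    }

row = {
    "prompt": "What is the capital of France?",
    "response-1": "Paris.",
    "response-2": "The capital of France is Paris.",
    "original_response": "Paris is the capital of France.",
    "category": "open_qa",
    "external_id": 0,
}
record = to_comparison(row)
```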
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
verbrannter/invoice_dataset_large_cleaned_2 | 2023-07-16T14:59:01.000Z | [
"region:us"
] | verbrannter | null | null | null | 1 | 113 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 7090191899.096405
num_examples: 25434
- name: test
num_bytes: 1772687358.9035952
num_examples: 6359
download_size: 1645061199
dataset_size: 8862879258.0
---
# Dataset Card for "invoice_dataset_large_cleaned_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/BQ | 2023-07-28T13:52:50.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 113 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 8156338
num_examples: 100000
- name: validation
num_bytes: 812244
num_examples: 10000
- name: test
num_bytes: 815362
num_examples: 10000
download_size: 5588828
dataset_size: 9783944
---
# Dataset Card for "BQ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mcaleste/sat_multiple_choice_math_may_23 | 2023-09-18T21:38:15.000Z | [
"region:us"
] | mcaleste | null | null | null | 0 | 113 | Entry not found |
minh21/COVID-QA-testset-data | 2023-10-06T07:10:41.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 113 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 16708455
num_examples: 201
download_size: 442083
dataset_size: 16708455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-testset-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/53f478ab | 2023-10-06T00:13:20.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 113 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 257
num_examples: 10
download_size: 1433
dataset_size: 257
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "53f478ab"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
code_x_glue_tt_text_to_text | 2023-07-27T15:29:15.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:da",
"language:en",
"language:lv",
"language:nb",
"language:zh",
"license:c-uda",
"code-documentation-translation",
"arxiv:2102.04664",
"region:us"
] | null | The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at https://github.com/MicrosoftDocs/. | @article{DBLP:journals/corr/abs-2102-04664,
author = {Shuai Lu and
Daya Guo and
Shuo Ren and
Junjie Huang and
Alexey Svyatkovskiy and
Ambrosio Blanco and
Colin B. Clement and
Dawn Drain and
Daxin Jiang and
Duyu Tang and
Ge Li and
Lidong Zhou and
Linjun Shou and
Long Zhou and
Michele Tufano and
Ming Gong and
Ming Zhou and
Nan Duan and
Neel Sundaresan and
Shao Kun Deng and
Shengyu Fu and
Shujie Liu},
title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
and Generation},
journal = {CoRR},
volume = {abs/2102.04664},
year = {2021}
} | null | 1 | 112 | ---
annotations_creators:
- found
language_creators:
- found
language:
- da
- en
- lv
- nb
- zh
license:
- c-uda
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: CodeXGlueTtTextToText
tags:
- code-documentation-translation
dataset_info:
- config_name: da_en
features:
- name: id
dtype: int32
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 8163215
num_examples: 42701
- name: validation
num_bytes: 190340
num_examples: 1000
- name: test
num_bytes: 190780
num_examples: 1000
download_size: 8007867
dataset_size: 8544335
- config_name: lv_en
features:
- name: id
dtype: int32
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 3644127
num_examples: 18749
- name: validation
num_bytes: 192519
num_examples: 1000
- name: test
num_bytes: 190875
num_examples: 1000
download_size: 3778501
dataset_size: 4027521
- config_name: no_en
features:
- name: id
dtype: int32
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 8761795
num_examples: 44322
- name: validation
num_bytes: 203823
num_examples: 1000
- name: test
num_bytes: 197135
num_examples: 1000
download_size: 8606833
dataset_size: 9162753
- config_name: zh_en
features:
- name: id
dtype: int32
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 9592196
num_examples: 50154
- name: validation
num_bytes: 192155
num_examples: 1000
- name: test
num_bytes: 195245
num_examples: 1000
download_size: 9353684
dataset_size: 9979596
---
# Dataset Card for "code_x_glue_tt_text_to_text"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text
- **Paper:** https://arxiv.org/abs/2102.04664
### Dataset Summary
CodeXGLUE text-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text
The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at https://github.com/MicrosoftDocs/.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for translating Technical documentation between languages.
### Languages
da_en, lv_en, no_en, zh_en
## Dataset Structure
### Data Instances
#### da_en
An example of 'test' looks as follows.
```
{
"id": 0,
"source": "4 . K\u00f8r modellen , og udgiv den som en webtjeneste .\n",
"target": "4 . Run the model , and publish it as a web service .\n"
}
```
#### lv_en
An example of 'train' looks as follows.
```
{
"id": 0,
"source": "title : Pakalpojumu objektu izveide\n",
"target": "title : Create service objects\n"
}
```
#### no_en
An example of 'validation' looks as follows.
```
{
"id": 0,
"source": "2 . \u00c5pne servicevaren du vil definere komponenter fra en stykkliste for .\n",
"target": "2 . Open the service item for which you want to set up components from a BOM .\n"
}
```
#### zh_en
An example of 'validation' looks as follows.
```
{
"id": 0,
"source": "& # 124 ; MCDUserNotificationReadStateFilterAny & # 124 ; 0 & # 124 ; \u5305\u62ec \u901a\u77e5 , \u800c \u4e0d \u8003\u8651 \u8bfb\u53d6 \u72b6\u6001 \u3002 & # 124 ;\n",
"target": "| MCDUserNotificationReadStateFilterAny | 0 | Include notifications regardless of read state . |\n"
}
```
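The zh_en sample above stores characters such as `|` as space-tokenized HTML entities (`& # 124 ;`); below is a hedged sketch of collapsing them back (the pattern is inferred from this one sample, not from an official preprocessing spec):

```python
import re

def untokenize_entities(text):
    """Collapse tokenized numeric entities like '& # 124 ;' into characters."""
    return re.sub(r"& # (\d+) ;", lambda m: chr(int(m.group(1))), text)

s = "& # 124 ; MCDUserNotificationReadStateFilterAny & # 124 ; 0 & # 124 ;"
print(untokenize_entities(s))  # "| MCDUserNotificationReadStateFilterAny | 0 |"
```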
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### da_en, lv_en, no_en, zh_en
|field name| type | description |
|----------|------|----------------------------------------|
|id |int32 | The index of the sample |
|source |string| The source language version of the text|
|target |string| The target language version of the text|
### Data Splits
|name |train|validation|test|
|-----|----:|---------:|---:|
|da_en|42701| 1000|1000|
|lv_en|18749| 1000|1000|
|no_en|44322| 1000|1000|
|zh_en|50154| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{DBLP:journals/corr/abs-2102-04664,
author = {Shuai Lu and
Daya Guo and
Shuo Ren and
Junjie Huang and
Alexey Svyatkovskiy and
Ambrosio Blanco and
Colin B. Clement and
Dawn Drain and
Daxin Jiang and
Duyu Tang and
Ge Li and
Lidong Zhou and
Linjun Shou and
Long Zhou and
Michele Tufano and
Ming Gong and
Ming Zhou and
Nan Duan and
Neel Sundaresan and
Shao Kun Deng and
Shengyu Fu and
Shujie Liu},
title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
and Generation},
journal = {CoRR},
volume = {abs/2102.04664},
year = {2021}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
jakeazcona/short-text-multi-labeled-emotion-classification | 2021-12-02T01:08:12.000Z | [
"region:us"
] | jakeazcona | null | null | null | 0 | 112 | Entry not found |
lgrobol/openminuscule | 2022-10-23T09:28:36.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100k<n<1M",
"source_datasets:original",
"language:en",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | lgrobol | null | null | null | 0 | 112 | ---
language_creators:
- crowdsourced
language:
- en
- fr
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 100k<n<1M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Open Minuscule
language_bcp47:
- en-GB
- fr-FR
---
Open Minuscule
==============
A little small wee corpus to train little small wee models.
## Dataset Description
### Dataset Summary
This is a raw text corpus, mainly intended for testing purposes.
### Languages
- French
- English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
It is a mashup including the following [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) licenced texts
- [*Rayons émis par les composés de l’uranium et du
thorium*](https://fr.wikisource.org/wiki/Rayons_%C3%A9mis_par_les_compos%C3%A9s_de_l%E2%80%99uranium_et_du_thorium),
Maria Skłodowska Curie
- [*Frankenstein, or the Modern
Prometheus*](https://en.wikisource.org/wiki/Frankenstein,_or_the_Modern_Prometheus_(Revised_Edition,_1831)),
Mary Wollstonecraft Shelley
- [*Les maîtres sonneurs*](https://fr.wikisource.org/wiki/Les_Ma%C3%AEtres_sonneurs), George Sand
It also includes the text of *Sketch of The Analytical Engine Invented by Charles Babbage With
notes upon the Memoir by the Translator* by Luigi Menabrea and Ada Lovelace, which to the best of
my knowledge should be public domain.
## Considerations for Using the Data
This really should not be used for anything but testing purposes
## Licence
This corpus is available under the Creative Commons Attribution-ShareAlike 4.0 License |
pkavumba/balanced-copa | 2022-10-03T00:39:01.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|copa",
"language:en",
"license:cc-by-4.0",
"region:us"
] | pkavumba | null | null | null | 0 | 112 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: BCOPA
size_categories:
- unknown
source_datasets:
- extended|copa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# Dataset Card for "Balanced COPA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://balanced-copa.github.io/](https://balanced-copa.github.io/)
- **Repository:** [Balanced COPA](https://github.com/Balanced-COPA/Balanced-COPA)
- **Paper:** [When Choosing Plausible Alternatives, Clever Hans can be Clever](https://aclanthology.org/D19-6004/)
- **Point of Contact:** [@pkavumba](https://github.com/pkavumba)
### Dataset Summary
Bala-COPA: An English language Dataset for Training Robust Commonsense Causal Reasoning Models
The Balanced Choice of Plausible Alternatives dataset is a benchmark for training machine learning models that are robust to superficial cues/spurious correlations. The dataset extends the COPA dataset(Roemmele et al. 2011) with mirrored instances that mitigate against token-level superficial cues in the original COPA answers. The superficial cues in the original COPA datasets result from an unbalanced token distribution between the correct and the incorrect answer choices, i.e., some tokens appear more in the correct choices than the incorrect ones. Balanced COPA equalizes the token distribution by adding mirrored instances with identical answer choices but different labels.
The details about the creation of Balanced COPA and the implementation of the baselines are available in the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
- English
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"id": 1,
"premise": "My body cast a shadow over the grass.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": false,
}
{
"id": 1001,
"premise": "The garden looked well-groomed.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": true,
}
```
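The token-balance property described in the summary can be checked with a small sketch (instances invented here; when a mirrored pair shares its choices but flips the label, every choice token lands equally often on the correct and the incorrect side):

```python
from collections import Counter

def cue_counts(instances):
    """Tally how often each token appears in correct vs. incorrect choices."""
    correct, incorrect = Counter(), Counter()
    for ex in instances:
        choices = [ex["choice1"], ex["choice2"]]
        correct.update(choices[ex["label"]].lower().split())
        incorrect.update(choices[1 - ex["label"]].lower().split())
    return correct, incorrect

pair = [
    {"choice1": "The sun was rising.", "choice2": "The grass was cut.", "label": 0},
    {"choice1": "The sun was rising.", "choice2": "The grass was cut.", "label": 1},
]
correct, incorrect = cue_counts(pair)
assert correct == incorrect  # balanced: no token favors the correct side
```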
### Data Fields
The data fields are the same among all splits.
#### en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `id`: a `int32` feature.
- `mirrored`: a `bool` feature.
### Data Splits
| validation | test |
| ---------: | ---: |
| 1,000 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{kavumba-etal-2019-choosing,
title = "When Choosing Plausible Alternatives, Clever Hans can be Clever",
author = "Kavumba, Pride and
Inoue, Naoya and
Heinzerling, Benjamin and
Singh, Keshav and
Reisert, Paul and
Inui, Kentaro",
booktitle = "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-6004",
doi = "10.18653/v1/D19-6004",
pages = "33--42",
abstract = "Pretrained language models, such as BERT and RoBERTa, have shown large improvements in the commonsense reasoning benchmark COPA. However, recent work found that many improvements in benchmarks of natural language understanding are not due to models learning the task, but due to their increasing ability to exploit superficial cues, such as tokens that occur more often in the correct answer than the wrong one. Are BERT{'}s and RoBERTa{'}s good performance on COPA also caused by this? We find superficial cues in COPA, as well as evidence that BERT exploits these cues.To remedy this problem, we introduce Balanced COPA, an extension of COPA that does not suffer from easy-to-exploit single token cues. We analyze BERT{'}s and RoBERTa{'}s performance on original and Balanced COPA, finding that BERT relies on superficial cues when they are present, but still achieves comparable performance once they are made ineffective, suggesting that BERT learns the task to a certain degree when forced to. In contrast, RoBERTa does not appear to rely on superficial cues.",
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [@pkavumba](https://github.com/pkavumba) for adding this dataset.
|
MickyMike/cvefixes_bigvul | 2022-10-12T10:31:00.000Z | [
"license:mit",
"region:us"
] | MickyMike | null | null | null | 4 | 112 | ---
license: mit
---
|
graphs-datasets/CIFAR10 | 2023-02-07T16:37:24.000Z | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
] | graphs-datasets | null | null | null | 1 | 112 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for CIFAR10
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:**: (see citation)
### Dataset Summary
The `CIFAR10` dataset consists of 45000 images in 10 classes, represented as graphs.
### Supported Tasks and Leaderboards
`CIFAR10` should be used for multiclass graph classification.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/CIFAR10")
# For the train set (replace by valid or test as needed).
# Each row is a dict of lists, so build the Data object field by field.
dataset_pg_list = [
    Data(
        x=torch.tensor(graph["node_feat"]),
        edge_index=torch.tensor(graph["edge_index"], dtype=torch.long),
        edge_attr=torch.tensor(graph["edge_attr"]),
        y=torch.tensor(graph["y"]),
        pos=torch.tensor(graph["pos"]),
    )
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 45,000 |
| average #nodes | 117.6 |
| average #edges | 941.2 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
- `pos` (list: 2 x #node): positional information of each node
### Data Splits
This dataset comes pre-split; the splits follow the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
keremberke/hard-hat-detection | 2023-01-16T21:39:24.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Construction",
"Utilities",
"Manufacturing",
"Logistics",
"Ppe",
"Assembly Line",
"Warehouse",
"Factory",
"Damage Risk",
"region:us"
] | keremberke | null | @misc{ hard-hats-fhbh5_dataset,
title = { Hard Hats Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-16 },
} | null | 3 | 112 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Construction
- Utilities
- Manufacturing
- Logistics
- Ppe
- Assembly Line
- Warehouse
- Factory
- Construction
- Logistics
- Utilities
- Damage Risk
- Ppe
---
<div align="center">
<img width="640" alt="keremberke/hard-hat-detection" src="https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['hardhat', 'no-hardhat']
```
### Number of Images
```json
{'test': 2001, 'train': 13782, 'valid': 3962}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/hard-hat-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2](https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ hard-hats-fhbh5_dataset,
title = { Hard Hats Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:17 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state-of-the-art computer vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 19,745 images.
Hard hats and PPE are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
pierreguillou/DocLayNet-base | 2023-05-17T08:56:30.000Z | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:token-classification",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:1K<n<10K",
"language:en",
"language:de",
"language:fr",
"language:ja",
"license:other",
"DocLayNet",
"COCO",
"PDF",
"IBM",
"Financial-Reports",
"Finance",
"Manuals",
"Scientific-Articles",
"Science",
"Laws",
"Law",
"Regulations",
"Patents",
"Government-Tenders",
"object-detection",
"image-segmentation",
"token-classification",
"arxiv:2206.01062",
"region:us"
] | pierreguillou | Accurate document layout analysis is a key requirement for high-quality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adequate size to train such models, they severely lack in layout variability since they are sourced from scientific article repositories such as PubMed and arXiv only. Consequently, the accuracy of the layout segmentation drops significantly when these models are applied on more challenging and diverse layouts. In this paper, we present \textit{DocLayNet}, a new, publicly available, document-layout annotation dataset in COCO format. It contains 80863 manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement. In multiple experiments, we provide smallline accuracy scores (in mAP) for a set of popular object detection models. We also demonstrate that these models fall approximately 10\% behind the inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and DocLayNet, showing that layout predictions of the DocLayNet-trained models are more robust and thus the preferred choice for general-purpose document-layout analysis. | @article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis},
doi = {10.1145/3534678.3539043},
url = {https://arxiv.org/abs/2206.01062},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022}
} | null | 6 | 112 | ---
language:
- en
- de
- fr
- ja
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet base
size_categories:
- 1K<n<10K
tags:
- DocLayNet
- COCO
- PDF
- IBM
- Financial-Reports
- Finance
- Manuals
- Scientific-Articles
- Science
- Laws
- Law
- Regulations
- Patents
- Government-Tenders
- object-detection
- image-segmentation
- token-classification
task_categories:
- object-detection
- image-segmentation
- token-classification
task_ids:
- instance-segmentation
---
# Dataset Card for DocLayNet base
## About this card (01/27/2023)
### Property and license
All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet).
DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information).
I do not claim any rights to the data taken from this dataset and published on this page.
### DocLayNet dataset
[DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories.
To date, the dataset can be downloaded through direct links or via the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)
Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
### Processing into a format facilitating its use by HF notebooks
Both options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This can limit experimentation for people with limited resources.
Moreover, even when downloading via the HF datasets library, the EXTRA zip must be downloaded separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This also requires additional code, because the bounding boxes of the texts do not necessarily match the annotated ones (computing the percentage of shared area between an annotated bounding box and a text bounding box makes it possible to compare them).
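The shared-area comparison described above can be sketched as follows; boxes are `[x_min, y_min, x_max, y_max]`, and the exact matching threshold used to build this dataset is not specified here:

```python
def intersection_area(box_a, box_b):
    """Area shared by two [x_min, y_min, x_max, y_max] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def shared_area_ratio(text_box, annotated_box):
    """Fraction of the OCR text box's area that falls inside the annotation."""
    area = (text_box[2] - text_box[0]) * (text_box[3] - text_box[1])
    if area == 0:
        return 0.0
    return intersection_area(text_box, annotated_box) / area

# A text cell fully inside an annotated paragraph scores 1.0:
print(shared_area_ratio([10, 10, 20, 20], [0, 0, 100, 100]))  # 1.0
```

A text cell is then assigned to the annotated box with the highest ratio (above some threshold, which is an arbitrary choice left to the user).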
Finally, in order to use Hugging Face notebooks for fine-tuning layout models such as LayoutLMv3 or LiLT, the DocLayNet data must be processed into a suitable format.
For all these reasons, I decided to process the DocLayNet dataset:
- into 3 datasets of different sizes:
  - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test)
  - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10,000 document images (6,910 train, 648 val, 499 test)
  - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100,000 document images (69,103 train, 6,480 val, 4,994 test)
- with associated texts and PDFs (base64 format),
- and in a format facilitating their use by HF notebooks.
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*
### About PDFs languages
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, **DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%)**. While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features."
### About PDFs categories distribution
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"The pages in DocLayNet can be grouped into **six distinct categories**, namely **Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders**. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes."

### Download & overview
The size of DocLayNet base is about 10% of the DocLayNet dataset (random selection respectively from the train, val and test files).
```
# !pip install -q datasets
from datasets import load_dataset
dataset_base = load_dataset("pierreguillou/DocLayNet-base")
# overview of dataset_base
DatasetDict({
train: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 6910
})
validation: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 648
})
test: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 499
})
})
```
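Since every page image is resized to 1025 x 1025 for annotation, a bounding box can be mapped back to the original PDF coordinates using the `original_width`/`original_height` and `coco_width`/`coco_height` fields listed in the overview above. A minimal sketch — the sample values are illustrative, and it assumes corner-encoded `[x_min, y_min, x_max, y_max]` boxes (check the actual encoding of `bboxes_block`/`bboxes_line` before use):

```python
def bbox_to_original(bbox, row):
    """Scale a [x_min, y_min, x_max, y_max] box from the resized COCO
    image back to the original page coordinate system."""
    ow, oh = row["original_width"], row["original_height"]
    cw, ch = row["coco_width"], row["coco_height"]
    x0, y0, x1, y1 = bbox
    return [x0 * ow / cw, y0 * oh / ch, x1 * ow / cw, y1 * oh / ch]

# Illustrative row: a US-Letter page (612 x 792 pt) annotated at 1025 x 1025
row = {"original_width": 612, "original_height": 792,
       "coco_width": 1025, "coco_height": 1025}
print(bbox_to_original([0, 0, 1025, 1025], row))  # [0.0, 0.0, 612.0, 792.0]
```

The same scaling applies per box when iterating over `dataset_base["train"]`.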
### Annotated bounding boxes
DocLayNet base makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines.
Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code.
#### Paragraphs

#### Lines

### HF notebooks
- [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge)
- [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge)
- [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge)
- [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge)
- [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) of Phil Schmid)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, making it possible to estimate annotation uncertainty and an upper bound on the achievable prediction accuracy of ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
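Given a list of COCO image records like the example above, the per-category page distribution can be tallied directly from the `doc_category` field (the records here are illustrative, not taken from the dataset):

```python
from collections import Counter

# Illustrative COCO image records carrying the custom doc_category field:
records = [
    {"id": 1, "doc_category": "financial_reports"},
    {"id": 2, "doc_category": "financial_reports"},
    {"id": 3, "doc_category": "patents"},
]

# Count pages per document category:
distribution = Counter(r["doc_category"] for r in records)
print(distribution)  # Counter({'financial_reports': 2, 'patents': 1})
```

Run over the full annotation file, this reproduces the category sizes shown in Figure 2 of the paper.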
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used to train the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset. |
JasperLS/prompt-injections | 2023-05-16T17:16:21.000Z | [
"region:us"
] | JasperLS | null | null | null | 5 | 112 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 71720
num_examples: 546
- name: test
num_bytes: 15981
num_examples: 116
download_size: 51215
dataset_size: 87701
---
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_baseline_train_10_eval_10 | 2023-09-19T05:30:24.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 112 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 52389
num_examples: 51
- name: validation
num_bytes: 58313
num_examples: 48
download_size: 0
dataset_size: 110702
---
# Dataset Card for "squad_baseline_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
spyroot/cornell_sum_movie_dialog | 2023-09-22T09:32:52.000Z | [
"license:apache-2.0",
"region:us"
] | spyroot | null | null | null | 0 | 112 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: movieID
dtype: string
- name: movieTitle
dtype: string
- name: movieYear
dtype: string
- name: movieIMDBRating
dtype: string
- name: movieNoIMDBVotes
dtype: string
- name: movieGenres
sequence: string
- name: utterance
sequence:
- name: lines
dtype: string
- name: lids
dtype: string
splits:
- name: train
num_bytes: 32283731
num_examples: 83097
download_size: 0
dataset_size: 32283731
---
|
reversebutlerianjihad/AnorexicPajama | 2023-09-25T08:04:20.000Z | [
"region:us"
] | reversebutlerianjihad | null | null | null | 1 | 112 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: redpajama_set_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 239181187.24
num_examples: 54890
- name: test
num_bytes: 40114950
num_examples: 9346
- name: validation
num_bytes: 39109042
num_examples: 9347
download_size: 185544769
dataset_size: 318405179.24
---
# Dataset Card for "AnorexicPajama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
facebook/babi_qa | 2023-01-25T14:26:58.000Z | [
"task_categories:question-answering",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"chained-qa",
"arxiv:1502.05698",
"arxiv:1511.06931",
"region:us"
] | facebook | The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading
comprehension via question answering. Our tasks measure understanding
in several ways: whether a system is able to answer questions via chaining facts,
simple induction, deduction and many more. The tasks are designed to be prerequisites
for any system that aims to be capable of conversing with a human.
The aim is to classify these tasks into skill sets, so that researchers
can identify (and then rectify) the failings of their systems. | @misc{weston2015aicomplete,
title={Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks},
author={Jason Weston and Antoine Bordes and Sumit Chopra and Alexander M. Rush and Bart van Merriënboer and Armand Joulin and Tomas Mikolov},
year={2015},
eprint={1502.05698},
archivePrefix={arXiv},
primaryClass={cs.AI}
} | null | 5 | 111 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: babi-1
pretty_name: BabiQa
configs:
- en-10k-qa1
- en-10k-qa10
- en-10k-qa11
- en-10k-qa12
- en-10k-qa13
- en-10k-qa14
- en-10k-qa15
- en-10k-qa16
- en-10k-qa17
- en-10k-qa18
- en-10k-qa19
- en-10k-qa2
- en-10k-qa20
- en-10k-qa3
- en-10k-qa4
- en-10k-qa5
- en-10k-qa6
- en-10k-qa7
- en-10k-qa8
- en-10k-qa9
- en-qa1
- en-qa10
- en-qa11
- en-qa12
- en-qa13
- en-qa14
- en-qa15
- en-qa16
- en-qa17
- en-qa18
- en-qa19
- en-qa2
- en-qa20
- en-qa3
- en-qa4
- en-qa5
- en-qa6
- en-qa7
- en-qa8
- en-qa9
- en-valid-10k-qa1
- en-valid-10k-qa10
- en-valid-10k-qa11
- en-valid-10k-qa12
- en-valid-10k-qa13
- en-valid-10k-qa14
- en-valid-10k-qa15
- en-valid-10k-qa16
- en-valid-10k-qa17
- en-valid-10k-qa18
- en-valid-10k-qa19
- en-valid-10k-qa2
- en-valid-10k-qa20
- en-valid-10k-qa3
- en-valid-10k-qa4
- en-valid-10k-qa5
- en-valid-10k-qa6
- en-valid-10k-qa7
- en-valid-10k-qa8
- en-valid-10k-qa9
- en-valid-qa1
- en-valid-qa10
- en-valid-qa11
- en-valid-qa12
- en-valid-qa13
- en-valid-qa14
- en-valid-qa15
- en-valid-qa16
- en-valid-qa17
- en-valid-qa18
- en-valid-qa19
- en-valid-qa2
- en-valid-qa20
- en-valid-qa3
- en-valid-qa4
- en-valid-qa5
- en-valid-qa6
- en-valid-qa7
- en-valid-qa8
- en-valid-qa9
- hn-10k-qa1
- hn-10k-qa10
- hn-10k-qa11
- hn-10k-qa12
- hn-10k-qa13
- hn-10k-qa14
- hn-10k-qa15
- hn-10k-qa16
- hn-10k-qa17
- hn-10k-qa18
- hn-10k-qa19
- hn-10k-qa2
- hn-10k-qa20
- hn-10k-qa3
- hn-10k-qa4
- hn-10k-qa5
- hn-10k-qa6
- hn-10k-qa7
- hn-10k-qa8
- hn-10k-qa9
- hn-qa1
- hn-qa10
- hn-qa11
- hn-qa12
- hn-qa13
- hn-qa14
- hn-qa15
- hn-qa16
- hn-qa17
- hn-qa18
- hn-qa19
- hn-qa2
- hn-qa20
- hn-qa3
- hn-qa4
- hn-qa5
- hn-qa6
- hn-qa7
- hn-qa8
- hn-qa9
- shuffled-10k-qa1
- shuffled-10k-qa10
- shuffled-10k-qa11
- shuffled-10k-qa12
- shuffled-10k-qa13
- shuffled-10k-qa14
- shuffled-10k-qa15
- shuffled-10k-qa16
- shuffled-10k-qa17
- shuffled-10k-qa18
- shuffled-10k-qa19
- shuffled-10k-qa2
- shuffled-10k-qa20
- shuffled-10k-qa3
- shuffled-10k-qa4
- shuffled-10k-qa5
- shuffled-10k-qa6
- shuffled-10k-qa7
- shuffled-10k-qa8
- shuffled-10k-qa9
- shuffled-qa1
- shuffled-qa10
- shuffled-qa11
- shuffled-qa12
- shuffled-qa13
- shuffled-qa14
- shuffled-qa15
- shuffled-qa16
- shuffled-qa17
- shuffled-qa18
- shuffled-qa19
- shuffled-qa2
- shuffled-qa20
- shuffled-qa3
- shuffled-qa4
- shuffled-qa5
- shuffled-qa6
- shuffled-qa7
- shuffled-qa8
- shuffled-qa9
tags:
- chained-qa
dataset_info:
- config_name: en-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 165386
num_examples: 200
- name: test
num_bytes: 165517
num_examples: 200
download_size: 15719851
dataset_size: 330903
- config_name: en-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 302888
num_examples: 200
- name: test
num_bytes: 306631
num_examples: 200
download_size: 15719851
dataset_size: 609519
- config_name: en-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 887756
num_examples: 200
- name: test
num_bytes: 883187
num_examples: 200
download_size: 15719851
dataset_size: 1770943
- config_name: en-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 205510
num_examples: 1000
- name: test
num_bytes: 205434
num_examples: 1000
download_size: 15719851
dataset_size: 410944
- config_name: en-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 337349
num_examples: 200
- name: test
num_bytes: 350457
num_examples: 200
download_size: 15719851
dataset_size: 687806
- config_name: en-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 173053
num_examples: 200
- name: test
num_bytes: 172249
num_examples: 200
download_size: 15719851
dataset_size: 345302
- config_name: en-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 224778
num_examples: 200
- name: test
num_bytes: 215512
num_examples: 200
download_size: 15719851
dataset_size: 440290
- config_name: en-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 212517
num_examples: 200
- name: test
num_bytes: 216244
num_examples: 200
download_size: 15719851
dataset_size: 428761
- config_name: en-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 168350
num_examples: 200
- name: test
num_bytes: 168248
num_examples: 200
download_size: 15719851
dataset_size: 336598
- config_name: en-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 170257
num_examples: 200
- name: test
num_bytes: 170672
num_examples: 200
download_size: 15719851
dataset_size: 340929
- config_name: en-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 178560
num_examples: 200
- name: test
num_bytes: 178840
num_examples: 200
download_size: 15719851
dataset_size: 357400
- config_name: en-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 185600
num_examples: 200
- name: test
num_bytes: 185529
num_examples: 200
download_size: 15719851
dataset_size: 371129
- config_name: en-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 190556
num_examples: 200
- name: test
num_bytes: 190484
num_examples: 200
download_size: 15719851
dataset_size: 381040
- config_name: en-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 234355
num_examples: 200
- name: test
num_bytes: 233204
num_examples: 200
download_size: 15719851
dataset_size: 467559
- config_name: en-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 163728
num_examples: 250
- name: test
num_bytes: 163809
num_examples: 250
download_size: 15719851
dataset_size: 327537
- config_name: en-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 456374
num_examples: 1000
- name: test
num_bytes: 456248
num_examples: 1000
download_size: 15719851
dataset_size: 912622
- config_name: en-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 103636
num_examples: 125
- name: test
num_bytes: 103618
num_examples: 125
download_size: 15719851
dataset_size: 207254
- config_name: en-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 162875
num_examples: 198
- name: test
num_bytes: 161266
num_examples: 199
download_size: 15719851
dataset_size: 324141
- config_name: en-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 404536
num_examples: 1000
- name: test
num_bytes: 404489
num_examples: 1000
download_size: 15719851
dataset_size: 809025
- config_name: en-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 115812
num_examples: 94
- name: test
num_bytes: 115863
num_examples: 93
download_size: 15719851
dataset_size: 231675
- config_name: hn-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 168605
num_examples: 200
- name: test
num_bytes: 168572
num_examples: 200
download_size: 15719851
dataset_size: 337177
- config_name: hn-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 296391
num_examples: 200
- name: test
num_bytes: 288429
num_examples: 200
download_size: 15719851
dataset_size: 584820
- config_name: hn-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 842184
num_examples: 167
- name: test
num_bytes: 808460
num_examples: 167
download_size: 15719851
dataset_size: 1650644
- config_name: hn-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 231303
num_examples: 1000
- name: test
num_bytes: 231230
num_examples: 1000
download_size: 15719851
dataset_size: 462533
- config_name: hn-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 320859
num_examples: 200
- name: test
num_bytes: 315396
num_examples: 200
download_size: 15719851
dataset_size: 636255
- config_name: hn-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 170796
num_examples: 200
- name: test
num_bytes: 171360
num_examples: 200
download_size: 15719851
dataset_size: 342156
- config_name: hn-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 206981
num_examples: 200
- name: test
num_bytes: 208080
num_examples: 200
download_size: 15719851
dataset_size: 415061
- config_name: hn-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 211584
num_examples: 200
- name: test
num_bytes: 222232
num_examples: 200
download_size: 15719851
dataset_size: 433816
- config_name: hn-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 187718
num_examples: 200
- name: test
num_bytes: 187341
num_examples: 200
download_size: 15719851
dataset_size: 375059
- config_name: hn-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 183583
num_examples: 200
- name: test
num_bytes: 182932
num_examples: 200
download_size: 15719851
dataset_size: 366515
- config_name: hn-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 179698
num_examples: 200
- name: test
num_bytes: 180461
num_examples: 200
download_size: 15719851
dataset_size: 360159
- config_name: hn-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 187731
num_examples: 200
- name: test
num_bytes: 187954
num_examples: 200
download_size: 15719851
dataset_size: 375685
- config_name: hn-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 191395
num_examples: 125
- name: test
num_bytes: 191747
num_examples: 125
download_size: 15719851
dataset_size: 383142
- config_name: hn-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 240659
num_examples: 200
- name: test
num_bytes: 240436
num_examples: 200
download_size: 15719851
dataset_size: 481095
- config_name: hn-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 170358
num_examples: 250
- name: test
num_bytes: 170259
num_examples: 250
download_size: 15719851
dataset_size: 340617
- config_name: hn-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 523093
num_examples: 1000
- name: test
num_bytes: 523032
num_examples: 1000
download_size: 15719851
dataset_size: 1046125
- config_name: hn-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 103878
num_examples: 125
- name: test
num_bytes: 104061
num_examples: 125
download_size: 15719851
dataset_size: 207939
- config_name: hn-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 173056
num_examples: 198
- name: test
num_bytes: 176824
num_examples: 198
download_size: 15719851
dataset_size: 349880
- config_name: hn-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 470225
num_examples: 1000
- name: test
num_bytes: 470479
num_examples: 1000
download_size: 15719851
dataset_size: 940704
- config_name: hn-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 115021
num_examples: 93
- name: test
num_bytes: 115088
num_examples: 94
download_size: 15719851
dataset_size: 230109
- config_name: en-10k-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1654288
num_examples: 2000
- name: test
num_bytes: 165517
num_examples: 200
download_size: 15719851
dataset_size: 1819805
- config_name: en-10k-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3062580
num_examples: 2000
- name: test
num_bytes: 306631
num_examples: 200
download_size: 15719851
dataset_size: 3369211
- config_name: en-10k-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 8921215
num_examples: 2000
- name: test
num_bytes: 883187
num_examples: 200
download_size: 15719851
dataset_size: 9804402
- config_name: en-10k-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2055105
num_examples: 10000
- name: test
num_bytes: 205434
num_examples: 1000
download_size: 15719851
dataset_size: 2260539
- config_name: en-10k-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3592157
num_examples: 2000
- name: test
num_bytes: 350457
num_examples: 200
download_size: 15719851
dataset_size: 3942614
- config_name: en-10k-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1726716
num_examples: 2000
- name: test
num_bytes: 172249
num_examples: 200
download_size: 15719851
dataset_size: 1898965
- config_name: en-10k-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2228087
num_examples: 2000
- name: test
num_bytes: 215512
num_examples: 200
download_size: 15719851
dataset_size: 2443599
- config_name: en-10k-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2141880
num_examples: 2000
- name: test
num_bytes: 216244
num_examples: 200
download_size: 15719851
dataset_size: 2358124
- config_name: en-10k-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1681213
num_examples: 2000
- name: test
num_bytes: 168248
num_examples: 200
download_size: 15719851
dataset_size: 1849461
- config_name: en-10k-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1707675
num_examples: 2000
- name: test
num_bytes: 170672
num_examples: 200
download_size: 15719851
dataset_size: 1878347
- config_name: en-10k-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1786179
num_examples: 2000
- name: test
num_bytes: 178840
num_examples: 200
download_size: 15719851
dataset_size: 1965019
- config_name: en-10k-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1854745
num_examples: 2000
- name: test
num_bytes: 185529
num_examples: 200
download_size: 15719851
dataset_size: 2040274
- config_name: en-10k-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1903149
num_examples: 2000
- name: test
num_bytes: 190484
num_examples: 200
download_size: 15719851
dataset_size: 2093633
- config_name: en-10k-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2321511
num_examples: 2000
- name: test
num_bytes: 233204
num_examples: 200
download_size: 15719851
dataset_size: 2554715
- config_name: en-10k-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1637398
num_examples: 2500
- name: test
num_bytes: 163809
num_examples: 250
download_size: 15719851
dataset_size: 1801207
- config_name: en-10k-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4562844
num_examples: 10000
- name: test
num_bytes: 456248
num_examples: 1000
download_size: 15719851
dataset_size: 5019092
- config_name: en-10k-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1034333
num_examples: 1250
- name: test
num_bytes: 103618
num_examples: 125
download_size: 15719851
dataset_size: 1137951
- config_name: en-10k-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1641650
num_examples: 1978
- name: test
num_bytes: 161266
num_examples: 199
download_size: 15719851
dataset_size: 1802916
- config_name: en-10k-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4045086
num_examples: 10000
- name: test
num_bytes: 404489
num_examples: 1000
download_size: 15719851
dataset_size: 4449575
- config_name: en-10k-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1157351
num_examples: 933
- name: test
num_bytes: 115863
num_examples: 93
download_size: 15719851
dataset_size: 1273214
- config_name: en-valid-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 148887
num_examples: 180
- name: test
num_bytes: 165517
num_examples: 200
- name: validation
num_bytes: 16539
num_examples: 20
download_size: 15719851
dataset_size: 330943
- config_name: en-valid-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 275106
num_examples: 180
- name: test
num_bytes: 306631
num_examples: 200
- name: validation
num_bytes: 27822
num_examples: 20
download_size: 15719851
dataset_size: 609559
- config_name: en-valid-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 794565
num_examples: 180
- name: test
num_bytes: 883187
num_examples: 200
- name: validation
num_bytes: 93231
num_examples: 20
download_size: 15719851
dataset_size: 1770983
- config_name: en-valid-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 184992
num_examples: 900
- name: test
num_bytes: 205434
num_examples: 1000
- name: validation
num_bytes: 20558
num_examples: 100
download_size: 15719851
dataset_size: 410984
- config_name: en-valid-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 305472
num_examples: 180
- name: test
num_bytes: 350457
num_examples: 200
- name: validation
num_bytes: 31917
num_examples: 20
download_size: 15719851
dataset_size: 687846
- config_name: en-valid-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 155845
num_examples: 180
- name: test
num_bytes: 172249
num_examples: 200
- name: validation
num_bytes: 17248
num_examples: 20
download_size: 15719851
dataset_size: 345342
- config_name: en-valid-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 203642
num_examples: 180
- name: test
num_bytes: 215512
num_examples: 200
- name: validation
num_bytes: 21176
num_examples: 20
download_size: 15719851
dataset_size: 440330
- config_name: en-valid-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 191599
num_examples: 180
- name: test
num_bytes: 216244
num_examples: 200
- name: validation
num_bytes: 20958
num_examples: 20
download_size: 15719851
dataset_size: 428801
- config_name: en-valid-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 151458
num_examples: 180
- name: test
num_bytes: 168248
num_examples: 200
- name: validation
num_bytes: 16932
num_examples: 20
download_size: 15719851
dataset_size: 336638
- config_name: en-valid-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 153240
num_examples: 180
- name: test
num_bytes: 170672
num_examples: 200
- name: validation
num_bytes: 17057
num_examples: 20
download_size: 15719851
dataset_size: 340969
- config_name: en-valid-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 160701
num_examples: 180
- name: test
num_bytes: 178840
num_examples: 200
- name: validation
num_bytes: 17899
num_examples: 20
download_size: 15719851
dataset_size: 357440
- config_name: en-valid-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 167031
num_examples: 180
- name: test
num_bytes: 185529
num_examples: 200
- name: validation
num_bytes: 18609
num_examples: 20
download_size: 15719851
dataset_size: 371169
- config_name: en-valid-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 171527
num_examples: 180
- name: test
num_bytes: 190484
num_examples: 200
- name: validation
num_bytes: 19069
num_examples: 20
download_size: 15719851
dataset_size: 381080
- config_name: en-valid-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 210650
num_examples: 180
- name: test
num_bytes: 233204
num_examples: 200
- name: validation
num_bytes: 23745
num_examples: 20
download_size: 15719851
dataset_size: 467599
- config_name: en-valid-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 147356
num_examples: 225
- name: test
num_bytes: 163809
num_examples: 250
- name: validation
num_bytes: 16412
num_examples: 25
download_size: 15719851
dataset_size: 327577
- config_name: en-valid-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 410711
num_examples: 900
- name: test
num_bytes: 456248
num_examples: 1000
- name: validation
num_bytes: 45703
num_examples: 100
download_size: 15719851
dataset_size: 912662
- config_name: en-valid-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 93596
num_examples: 113
- name: test
num_bytes: 103618
num_examples: 125
- name: validation
num_bytes: 10080
num_examples: 12
download_size: 15719851
dataset_size: 207294
- config_name: en-valid-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 147338
num_examples: 179
- name: test
num_bytes: 161266
num_examples: 199
- name: validation
num_bytes: 15577
num_examples: 19
download_size: 15719851
dataset_size: 324181
- config_name: en-valid-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 364090
num_examples: 900
- name: test
num_bytes: 404489
num_examples: 1000
- name: validation
num_bytes: 40486
num_examples: 100
download_size: 15719851
dataset_size: 809065
- config_name: en-valid-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 104706
num_examples: 85
- name: test
num_bytes: 115863
num_examples: 93
- name: validation
num_bytes: 11146
num_examples: 9
download_size: 15719851
dataset_size: 231715
- config_name: en-valid-10k-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1488751
num_examples: 1800
- name: test
num_bytes: 165517
num_examples: 200
- name: validation
num_bytes: 165577
num_examples: 200
download_size: 15719851
dataset_size: 1819845
- config_name: en-valid-10k-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2746462
num_examples: 1800
- name: test
num_bytes: 306631
num_examples: 200
- name: validation
num_bytes: 316158
num_examples: 200
download_size: 15719851
dataset_size: 3369251
- config_name: en-valid-10k-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 8021847
num_examples: 1800
- name: test
num_bytes: 883187
num_examples: 200
- name: validation
num_bytes: 899408
num_examples: 200
download_size: 15719851
dataset_size: 9804442
- config_name: en-valid-10k-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1849497
num_examples: 9000
- name: test
num_bytes: 205434
num_examples: 1000
- name: validation
num_bytes: 205648
num_examples: 1000
download_size: 15719851
dataset_size: 2260579
- config_name: en-valid-10k-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3234186
num_examples: 1800
- name: test
num_bytes: 350457
num_examples: 200
- name: validation
num_bytes: 358011
num_examples: 200
download_size: 15719851
dataset_size: 3942654
- config_name: en-valid-10k-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1553957
num_examples: 1800
- name: test
num_bytes: 172249
num_examples: 200
- name: validation
num_bytes: 172799
num_examples: 200
download_size: 15719851
dataset_size: 1899005
- config_name: en-valid-10k-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2003820
num_examples: 1800
- name: test
num_bytes: 215512
num_examples: 200
- name: validation
num_bytes: 224307
num_examples: 200
download_size: 15719851
dataset_size: 2443639
- config_name: en-valid-10k-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1926339
num_examples: 1800
- name: test
num_bytes: 216244
num_examples: 200
- name: validation
num_bytes: 215581
num_examples: 200
download_size: 15719851
dataset_size: 2358164
- config_name: en-valid-10k-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1512917
num_examples: 1800
- name: test
num_bytes: 168248
num_examples: 200
- name: validation
num_bytes: 168336
num_examples: 200
download_size: 15719851
dataset_size: 1849501
- config_name: en-valid-10k-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1536416
num_examples: 1800
- name: test
num_bytes: 170672
num_examples: 200
- name: validation
num_bytes: 171299
num_examples: 200
download_size: 15719851
dataset_size: 1878387
- config_name: en-valid-10k-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1607505
num_examples: 1800
- name: test
num_bytes: 178840
num_examples: 200
- name: validation
num_bytes: 178714
num_examples: 200
download_size: 15719851
dataset_size: 1965059
- config_name: en-valid-10k-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1669198
num_examples: 1800
- name: test
num_bytes: 185529
num_examples: 200
- name: validation
num_bytes: 185587
num_examples: 200
download_size: 15719851
dataset_size: 2040314
- config_name: en-valid-10k-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1712558
num_examples: 1800
- name: test
num_bytes: 190484
num_examples: 200
- name: validation
num_bytes: 190631
num_examples: 200
download_size: 15719851
dataset_size: 2093673
- config_name: en-valid-10k-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2091491
num_examples: 1800
- name: test
num_bytes: 233204
num_examples: 200
- name: validation
num_bytes: 230060
num_examples: 200
download_size: 15719851
dataset_size: 2554755
- config_name: en-valid-10k-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1473615
num_examples: 2250
- name: test
num_bytes: 163809
num_examples: 250
- name: validation
num_bytes: 163823
num_examples: 250
download_size: 15719851
dataset_size: 1801247
- config_name: en-valid-10k-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4106444
num_examples: 9000
- name: test
num_bytes: 456248
num_examples: 1000
- name: validation
num_bytes: 456440
num_examples: 1000
download_size: 15719851
dataset_size: 5019132
- config_name: en-valid-10k-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 930465
num_examples: 1125
- name: test
num_bytes: 103618
num_examples: 125
- name: validation
num_bytes: 103908
num_examples: 125
download_size: 15719851
dataset_size: 1137991
- config_name: en-valid-10k-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1477467
num_examples: 1781
- name: test
num_bytes: 161266
num_examples: 199
- name: validation
num_bytes: 164223
num_examples: 197
download_size: 15719851
dataset_size: 1802956
- config_name: en-valid-10k-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3640527
num_examples: 9000
- name: test
num_bytes: 404489
num_examples: 1000
- name: validation
num_bytes: 404599
num_examples: 1000
download_size: 15719851
dataset_size: 4449615
- config_name: en-valid-10k-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1041856
num_examples: 840
- name: test
num_bytes: 115863
num_examples: 93
- name: validation
num_bytes: 115535
num_examples: 93
download_size: 15719851
dataset_size: 1273254
- config_name: hn-10k-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1684003
num_examples: 2000
- name: test
num_bytes: 168572
num_examples: 200
download_size: 15719851
dataset_size: 1852575
- config_name: hn-10k-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2934642
num_examples: 2000
- name: test
num_bytes: 288429
num_examples: 200
download_size: 15719851
dataset_size: 3223071
- config_name: hn-10k-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 8440008
num_examples: 1667
- name: test
num_bytes: 808460
num_examples: 167
download_size: 15719851
dataset_size: 9248468
- config_name: hn-10k-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2312075
num_examples: 10000
- name: test
num_bytes: 231230
num_examples: 1000
download_size: 15719851
dataset_size: 2543305
- config_name: hn-10k-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3301271
num_examples: 2000
- name: test
num_bytes: 315396
num_examples: 200
download_size: 15719851
dataset_size: 3616667
- config_name: hn-10k-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1703863
num_examples: 2000
- name: test
num_bytes: 171360
num_examples: 200
download_size: 15719851
dataset_size: 1875223
- config_name: hn-10k-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2091460
num_examples: 2000
- name: test
num_bytes: 208080
num_examples: 200
download_size: 15719851
dataset_size: 2299540
- config_name: hn-10k-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2178277
num_examples: 2000
- name: test
num_bytes: 222232
num_examples: 200
download_size: 15719851
dataset_size: 2400509
- config_name: hn-10k-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1874753
num_examples: 2000
- name: test
num_bytes: 187341
num_examples: 200
download_size: 15719851
dataset_size: 2062094
- config_name: hn-10k-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1830698
num_examples: 2000
- name: test
num_bytes: 182932
num_examples: 200
download_size: 15719851
dataset_size: 2013630
- config_name: hn-10k-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1798057
num_examples: 2000
- name: test
num_bytes: 180461
num_examples: 200
download_size: 15719851
dataset_size: 1978518
- config_name: hn-10k-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1879776
num_examples: 2000
- name: test
num_bytes: 187954
num_examples: 200
download_size: 15719851
dataset_size: 2067730
- config_name: hn-10k-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1915482
num_examples: 1250
- name: test
num_bytes: 191747
num_examples: 125
download_size: 15719851
dataset_size: 2107229
- config_name: hn-10k-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2392212
num_examples: 2000
- name: test
num_bytes: 240436
num_examples: 200
download_size: 15719851
dataset_size: 2632648
- config_name: hn-10k-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1702512
num_examples: 2500
- name: test
num_bytes: 170259
num_examples: 250
download_size: 15719851
dataset_size: 1872771
- config_name: hn-10k-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 5229983
num_examples: 10000
- name: test
num_bytes: 523032
num_examples: 1000
download_size: 15719851
dataset_size: 5753015
- config_name: hn-10k-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1039729
num_examples: 1250
- name: test
num_bytes: 104061
num_examples: 125
download_size: 15719851
dataset_size: 1143790
- config_name: hn-10k-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1738458
num_examples: 1977
- name: test
num_bytes: 176824
num_examples: 198
download_size: 15719851
dataset_size: 1915282
- config_name: hn-10k-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4702044
num_examples: 10000
- name: test
num_bytes: 470479
num_examples: 1000
download_size: 15719851
dataset_size: 5172523
- config_name: hn-10k-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1147599
num_examples: 934
- name: test
num_bytes: 115088
num_examples: 94
download_size: 15719851
dataset_size: 1262687
- config_name: shuffled-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 165386
num_examples: 200
- name: test
num_bytes: 165517
num_examples: 200
download_size: 15719851
dataset_size: 330903
- config_name: shuffled-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 302888
num_examples: 200
- name: test
num_bytes: 306631
num_examples: 200
download_size: 15719851
dataset_size: 609519
- config_name: shuffled-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 887756
num_examples: 200
- name: test
num_bytes: 883187
num_examples: 200
download_size: 15719851
dataset_size: 1770943
- config_name: shuffled-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 205510
num_examples: 1000
- name: test
num_bytes: 205434
num_examples: 1000
download_size: 15719851
dataset_size: 410944
- config_name: shuffled-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 337349
num_examples: 200
- name: test
num_bytes: 350457
num_examples: 200
download_size: 15719851
dataset_size: 687806
- config_name: shuffled-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 173053
num_examples: 200
- name: test
num_bytes: 172249
num_examples: 200
download_size: 15719851
dataset_size: 345302
- config_name: shuffled-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 224778
num_examples: 200
- name: test
num_bytes: 215512
num_examples: 200
download_size: 15719851
dataset_size: 440290
- config_name: shuffled-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 212517
num_examples: 200
- name: test
num_bytes: 216244
num_examples: 200
download_size: 15719851
dataset_size: 428761
- config_name: shuffled-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 168350
num_examples: 200
- name: test
num_bytes: 168248
num_examples: 200
download_size: 15719851
dataset_size: 336598
- config_name: shuffled-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 170257
num_examples: 200
- name: test
num_bytes: 170672
num_examples: 200
download_size: 15719851
dataset_size: 340929
- config_name: shuffled-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 178083
num_examples: 200
- name: test
num_bytes: 178313
num_examples: 200
download_size: 15719851
dataset_size: 356396
- config_name: shuffled-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 185600
num_examples: 200
- name: test
num_bytes: 185529
num_examples: 200
download_size: 15719851
dataset_size: 371129
- config_name: shuffled-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 190556
num_examples: 200
- name: test
num_bytes: 190484
num_examples: 200
download_size: 15719851
dataset_size: 381040
- config_name: shuffled-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 234355
num_examples: 200
- name: test
num_bytes: 233204
num_examples: 200
download_size: 15719851
dataset_size: 467559
- config_name: shuffled-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 163728
num_examples: 250
- name: test
num_bytes: 163809
num_examples: 250
download_size: 15719851
dataset_size: 327537
- config_name: shuffled-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 456374
num_examples: 1000
- name: test
num_bytes: 456248
num_examples: 1000
download_size: 15719851
dataset_size: 912622
- config_name: shuffled-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 103636
num_examples: 125
- name: test
num_bytes: 103618
num_examples: 125
download_size: 15719851
dataset_size: 207254
- config_name: shuffled-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 162875
num_examples: 198
- name: test
num_bytes: 161266
num_examples: 199
download_size: 15719851
dataset_size: 324141
- config_name: shuffled-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 404536
num_examples: 1000
- name: test
num_bytes: 404489
num_examples: 1000
download_size: 15719851
dataset_size: 809025
- config_name: shuffled-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 115812
num_examples: 94
- name: test
num_bytes: 115863
num_examples: 93
download_size: 15719851
dataset_size: 231675
- config_name: shuffled-10k-qa1
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1654288
num_examples: 2000
- name: test
num_bytes: 165517
num_examples: 200
download_size: 15719851
dataset_size: 1819805
- config_name: shuffled-10k-qa2
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3062580
num_examples: 2000
- name: test
num_bytes: 306631
num_examples: 200
download_size: 15719851
dataset_size: 3369211
- config_name: shuffled-10k-qa3
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 8921215
num_examples: 2000
- name: test
num_bytes: 883187
num_examples: 200
download_size: 15719851
dataset_size: 9804402
- config_name: shuffled-10k-qa4
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2055105
num_examples: 10000
- name: test
num_bytes: 205434
num_examples: 1000
download_size: 15719851
dataset_size: 2260539
- config_name: shuffled-10k-qa5
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3592157
num_examples: 2000
- name: test
num_bytes: 350457
num_examples: 200
download_size: 15719851
dataset_size: 3942614
- config_name: shuffled-10k-qa6
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1726716
num_examples: 2000
- name: test
num_bytes: 172249
num_examples: 200
download_size: 15719851
dataset_size: 1898965
- config_name: shuffled-10k-qa7
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2228087
num_examples: 2000
- name: test
num_bytes: 215512
num_examples: 200
download_size: 15719851
dataset_size: 2443599
- config_name: shuffled-10k-qa8
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2141880
num_examples: 2000
- name: test
num_bytes: 216244
num_examples: 200
download_size: 15719851
dataset_size: 2358124
- config_name: shuffled-10k-qa9
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1681213
num_examples: 2000
- name: test
num_bytes: 168248
num_examples: 200
download_size: 15719851
dataset_size: 1849461
- config_name: shuffled-10k-qa10
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1707675
num_examples: 2000
- name: test
num_bytes: 170672
num_examples: 200
download_size: 15719851
dataset_size: 1878347
- config_name: shuffled-10k-qa11
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1781176
num_examples: 2000
- name: test
num_bytes: 178313
num_examples: 200
download_size: 15719851
dataset_size: 1959489
- config_name: shuffled-10k-qa12
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1854745
num_examples: 2000
- name: test
num_bytes: 185529
num_examples: 200
download_size: 15719851
dataset_size: 2040274
- config_name: shuffled-10k-qa13
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1903149
num_examples: 2000
- name: test
num_bytes: 190484
num_examples: 200
download_size: 15719851
dataset_size: 2093633
- config_name: shuffled-10k-qa14
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2321511
num_examples: 2000
- name: test
num_bytes: 233204
num_examples: 200
download_size: 15719851
dataset_size: 2554715
- config_name: shuffled-10k-qa15
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1637398
num_examples: 2500
- name: test
num_bytes: 163809
num_examples: 250
download_size: 15719851
dataset_size: 1801207
- config_name: shuffled-10k-qa16
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4562844
num_examples: 10000
- name: test
num_bytes: 456248
num_examples: 1000
download_size: 15719851
dataset_size: 5019092
- config_name: shuffled-10k-qa17
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1034333
num_examples: 1250
- name: test
num_bytes: 103618
num_examples: 125
download_size: 15719851
dataset_size: 1137951
- config_name: shuffled-10k-qa18
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1641650
num_examples: 1978
- name: test
num_bytes: 161266
num_examples: 199
download_size: 15719851
dataset_size: 1802916
- config_name: shuffled-10k-qa19
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4045086
num_examples: 10000
- name: test
num_bytes: 404489
num_examples: 1000
download_size: 15719851
dataset_size: 4449575
- config_name: shuffled-10k-qa20
features:
- name: story
sequence:
- name: id
dtype: string
- name: type
dtype:
class_label:
names:
'0': context
'1': question
- name: text
dtype: string
- name: supporting_ids
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1157351
num_examples: 933
- name: test
num_bytes: 115863
num_examples: 93
download_size: 15719851
dataset_size: 1273214
---
# Dataset Card for bAbi QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The bAbI project](https://research.fb.com/downloads/babi/)
- **Repository:**
- **Paper:** [arXiv Paper](https://arxiv.org/pdf/1502.05698.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. The tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction, and many more. They are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.
### Supported Tasks and Leaderboards
The dataset supports a set of 20 proxy story-based question answering tasks for various "types" in English and Hindi. The tasks are:
|task_no|task_name|
|----|------------|
|qa1 |single-supporting-fact|
|qa2 |two-supporting-facts|
|qa3 |three-supporting-facts|
|qa4 |two-arg-relations|
|qa5 |three-arg-relations|
|qa6 |yes-no-questions|
|qa7 |counting|
|qa8 |lists-sets|
|qa9 |simple-negation|
|qa10| indefinite-knowledge|
|qa11| basic-coreference|
|qa12| conjunction|
|qa13| compound-coreference|
|qa14| time-reasoning|
|qa15| basic-deduction|
|qa16| basic-induction|
|qa17| positional-reasoning|
|qa18| size-reasoning|
|qa19| path-finding|
|qa20| agents-motivations|
The "types" are are:
- `en`
- the tasks in English, readable by humans.
- `hn`
- the tasks in Hindi, readable by humans.
- `shuffled`
  - the same tasks with the letters shuffled so they are not readable by humans, and so existing parsers and taggers cannot be used in a straightforward fashion to leverage extra resources; the learner is thus forced to rely on the given training data alone. This mimics a learner being presented with a language for the first time and having to learn it from scratch.
- `en-10k`, `shuffled-10k` and `hn-10k`
- the same tasks in the three formats, but with 10,000 training examples, rather than 1000 training examples.
- `en-valid` and `en-valid-10k`
  - the same as `en` and `en-10k`, except that the train sets have been split into train and validation portions (a 90%/10% split).
To get a particular dataset, use `load_dataset('babi_qa', type='<type>', task_no='<task_no>')`, where `<type>` is one of the types listed above and `<task_no>` is one of the task numbers. For example, `load_dataset('babi_qa', type='en', task_no='qa1')`.
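Each `(type, task_no)` pair corresponds to one of the configs listed in the metadata above, named with the pattern `{type}-{task_no}` (e.g. `en-qa1`, `hn-10k-qa3`, `en-valid-10k-qa14`). A minimal sketch of this naming convention (the helper name is hypothetical, for illustration only):

```python
def babi_config_name(type_: str, task_no: str) -> str:
    """Build the '{type}-{task_no}' config name used by this card's configs."""
    # e.g. ('en', 'qa1') -> 'en-qa1'; ('en-valid-10k', 'qa14') -> 'en-valid-10k-qa14'
    return f"{type_}-{task_no}"

# The documented loading call (requires the `datasets` library and network access):
#   from datasets import load_dataset
#   dataset = load_dataset('babi_qa', type='en', task_no='qa1')
```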
### Languages
## Dataset Structure
### Data Instances
An instance from the `en-qa1` config's `train` split:
```
{'story': {'answer': ['', '', 'bathroom', '', '', 'hallway', '', '', 'hallway', '', '', 'office', '', '', 'bathroom'], 'id': ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'], 'supporting_ids': [[], [], ['1'], [], [], ['4'], [], [], ['4'], [], [], ['11'], [], [], ['8']], 'text': ['Mary moved to the bathroom.', 'John went to the hallway.', 'Where is Mary?', 'Daniel went back to the hallway.', 'Sandra moved to the garden.', 'Where is Daniel?', 'John moved to the office.', 'Sandra journeyed to the bathroom.', 'Where is Daniel?', 'Mary moved to the hallway.', 'Daniel travelled to the office.', 'Where is Daniel?', 'John went back to the garden.', 'John moved to the bedroom.', 'Where is Sandra?'], 'type': [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]}}
```
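The flattened `story` structure interleaves context and question lines, so a common first step is to reassemble it into (question, answer, supporting facts) triples. A minimal sketch, assuming the field layout shown in the instance above (where `type` 0 is `context` and 1 is `question`):

```python
def extract_qa(story: dict) -> list[dict]:
    """Pair each question line with its answer and its supporting context lines."""
    # Map line ids to line text so supporting_ids can be resolved.
    by_id = dict(zip(story["id"], story["text"]))
    pairs = []
    for i, line_type in enumerate(story["type"]):
        if line_type == 1:  # 1 == 'question', 0 == 'context'
            pairs.append({
                "question": story["text"][i],
                "answer": story["answer"][i],
                "supporting": [by_id[sid] for sid in story["supporting_ids"][i]],
            })
    return pairs

# A shortened version of the instance shown above:
story = {
    "id": ["1", "2", "3"],
    "type": [0, 0, 1],
    "text": ["Mary moved to the bathroom.", "John went to the hallway.", "Where is Mary?"],
    "supporting_ids": [[], [], ["1"]],
    "answer": ["", "", "bathroom"],
}
```

Running `extract_qa(story)` on this example yields a single triple whose answer is `"bathroom"` and whose supporting facts are the first context line.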
### Data Fields
- `story`: a dictionary feature containing:
- `id`: a `string` feature, which denotes the line number in the example.
- `type`: a classification label, with possible values including `context`, `question`, denoting whether the text is context or a question.
  - `text`: a `string` feature containing the text, whether it is a question or context.
- `supporting_ids`: a `list` of `string` features containing the line numbers of the lines in the example which support the answer.
  - `answer`: a `string` feature containing the answer to the question, or an empty string if the `type` is not `question`.
### Data Splits
The splits and corresponding sizes are:
| | train | test | validation |
|-------------------|---------|--------|--------------|
| en-qa1 | 200 | 200 | - |
| en-qa2 | 200 | 200 | - |
| en-qa3 | 200 | 200 | - |
| en-qa4 | 1000 | 1000 | - |
| en-qa5 | 200 | 200 | - |
| en-qa6 | 200 | 200 | - |
| en-qa7 | 200 | 200 | - |
| en-qa8 | 200 | 200 | - |
| en-qa9 | 200 | 200 | - |
| en-qa10 | 200 | 200 | - |
| en-qa11 | 200 | 200 | - |
| en-qa12 | 200 | 200 | - |
| en-qa13 | 200 | 200 | - |
| en-qa14 | 200 | 200 | - |
| en-qa15 | 250 | 250 | - |
| en-qa16 | 1000 | 1000 | - |
| en-qa17 | 125 | 125 | - |
| en-qa18 | 198 | 199 | - |
| en-qa19 | 1000 | 1000 | - |
| en-qa20 | 94 | 93 | - |
| en-10k-qa1 | 2000 | 200 | - |
| en-10k-qa2 | 2000 | 200 | - |
| en-10k-qa3 | 2000 | 200 | - |
| en-10k-qa4 | 10000 | 1000 | - |
| en-10k-qa5 | 2000 | 200 | - |
| en-10k-qa6 | 2000 | 200 | - |
| en-10k-qa7 | 2000 | 200 | - |
| en-10k-qa8 | 2000 | 200 | - |
| en-10k-qa9 | 2000 | 200 | - |
| en-10k-qa10 | 2000 | 200 | - |
| en-10k-qa11 | 2000 | 200 | - |
| en-10k-qa12 | 2000 | 200 | - |
| en-10k-qa13 | 2000 | 200 | - |
| en-10k-qa14 | 2000 | 200 | - |
| en-10k-qa15 | 2500 | 250 | - |
| en-10k-qa16 | 10000 | 1000 | - |
| en-10k-qa17 | 1250 | 125 | - |
| en-10k-qa18 | 1978 | 199 | - |
| en-10k-qa19 | 10000 | 1000 | - |
| en-10k-qa20 | 933 | 93 | - |
| en-valid-qa1 | 180 | 200 | 20 |
| en-valid-qa2 | 180 | 200 | 20 |
| en-valid-qa3 | 180 | 200 | 20 |
| en-valid-qa4 | 900 | 1000 | 100 |
| en-valid-qa5 | 180 | 200 | 20 |
| en-valid-qa6 | 180 | 200 | 20 |
| en-valid-qa7 | 180 | 200 | 20 |
| en-valid-qa8 | 180 | 200 | 20 |
| en-valid-qa9 | 180 | 200 | 20 |
| en-valid-qa10 | 180 | 200 | 20 |
| en-valid-qa11 | 180 | 200 | 20 |
| en-valid-qa12 | 180 | 200 | 20 |
| en-valid-qa13 | 180 | 200 | 20 |
| en-valid-qa14 | 180 | 200 | 20 |
| en-valid-qa15 | 225 | 250 | 25 |
| en-valid-qa16 | 900 | 1000 | 100 |
| en-valid-qa17 | 113 | 125 | 12 |
| en-valid-qa18 | 179 | 199 | 19 |
| en-valid-qa19 | 900 | 1000 | 100 |
| en-valid-qa20 | 85 | 93 | 9 |
| en-valid-10k-qa1 | 1800 | 200 | 200 |
| en-valid-10k-qa2 | 1800 | 200 | 200 |
| en-valid-10k-qa3 | 1800 | 200 | 200 |
| en-valid-10k-qa4 | 9000 | 1000 | 1000 |
| en-valid-10k-qa5 | 1800 | 200 | 200 |
| en-valid-10k-qa6 | 1800 | 200 | 200 |
| en-valid-10k-qa7 | 1800 | 200 | 200 |
| en-valid-10k-qa8 | 1800 | 200 | 200 |
| en-valid-10k-qa9 | 1800 | 200 | 200 |
| en-valid-10k-qa10 | 1800 | 200 | 200 |
| en-valid-10k-qa11 | 1800 | 200 | 200 |
| en-valid-10k-qa12 | 1800 | 200 | 200 |
| en-valid-10k-qa13 | 1800 | 200 | 200 |
| en-valid-10k-qa14 | 1800 | 200 | 200 |
| en-valid-10k-qa15 | 2250 | 250 | 250 |
| en-valid-10k-qa16 | 9000 | 1000 | 1000 |
| en-valid-10k-qa17 | 1125 | 125 | 125 |
| en-valid-10k-qa18 | 1781 | 199 | 197 |
| en-valid-10k-qa19 | 9000 | 1000 | 1000 |
| en-valid-10k-qa20 | 840 | 93 | 93 |
| hn-qa1 | 200 | 200 | - |
| hn-qa2 | 200 | 200 | - |
| hn-qa3 | 167 | 167 | - |
| hn-qa4 | 1000 | 1000 | - |
| hn-qa5 | 200 | 200 | - |
| hn-qa6 | 200 | 200 | - |
| hn-qa7 | 200 | 200 | - |
| hn-qa8 | 200 | 200 | - |
| hn-qa9 | 200 | 200 | - |
| hn-qa10 | 200 | 200 | - |
| hn-qa11 | 200 | 200 | - |
| hn-qa12 | 200 | 200 | - |
| hn-qa13 | 125 | 125 | - |
| hn-qa14 | 200 | 200 | - |
| hn-qa15 | 250 | 250 | - |
| hn-qa16 | 1000 | 1000 | - |
| hn-qa17 | 125 | 125 | - |
| hn-qa18 | 198 | 198 | - |
| hn-qa19 | 1000 | 1000 | - |
| hn-qa20 | 93 | 94 | - |
| hn-10k-qa1 | 2000 | 200 | - |
| hn-10k-qa2 | 2000 | 200 | - |
| hn-10k-qa3 | 1667 | 167 | - |
| hn-10k-qa4 | 10000 | 1000 | - |
| hn-10k-qa5 | 2000 | 200 | - |
| hn-10k-qa6 | 2000 | 200 | - |
| hn-10k-qa7 | 2000 | 200 | - |
| hn-10k-qa8 | 2000 | 200 | - |
| hn-10k-qa9 | 2000 | 200 | - |
| hn-10k-qa10 | 2000 | 200 | - |
| hn-10k-qa11 | 2000 | 200 | - |
| hn-10k-qa12 | 2000 | 200 | - |
| hn-10k-qa13 | 1250 | 125 | - |
| hn-10k-qa14 | 2000 | 200 | - |
| hn-10k-qa15 | 2500 | 250 | - |
| hn-10k-qa16 | 10000 | 1000 | - |
| hn-10k-qa17 | 1250 | 125 | - |
| hn-10k-qa18 | 1977 | 198 | - |
| hn-10k-qa19 | 10000 | 1000 | - |
| hn-10k-qa20 | 934 | 94 | - |
| shuffled-qa1 | 200 | 200 | - |
| shuffled-qa2 | 200 | 200 | - |
| shuffled-qa3 | 200 | 200 | - |
| shuffled-qa4 | 1000 | 1000 | - |
| shuffled-qa5 | 200 | 200 | - |
| shuffled-qa6 | 200 | 200 | - |
| shuffled-qa7 | 200 | 200 | - |
| shuffled-qa8 | 200 | 200 | - |
| shuffled-qa9 | 200 | 200 | - |
| shuffled-qa10 | 200 | 200 | - |
| shuffled-qa11 | 200 | 200 | - |
| shuffled-qa12 | 200 | 200 | - |
| shuffled-qa13 | 200 | 200 | - |
| shuffled-qa14 | 200 | 200 | - |
| shuffled-qa15 | 250 | 250 | - |
| shuffled-qa16 | 1000 | 1000 | - |
| shuffled-qa17 | 125 | 125 | - |
| shuffled-qa18 | 198 | 199 | - |
| shuffled-qa19 | 1000 | 1000 | - |
| shuffled-qa20 | 94 | 93 | - |
| shuffled-10k-qa1 | 2000 | 200 | - |
| shuffled-10k-qa2 | 2000 | 200 | - |
| shuffled-10k-qa3 | 2000 | 200 | - |
| shuffled-10k-qa4 | 10000 | 1000 | - |
| shuffled-10k-qa5 | 2000 | 200 | - |
| shuffled-10k-qa6 | 2000 | 200 | - |
| shuffled-10k-qa7 | 2000 | 200 | - |
| shuffled-10k-qa8 | 2000 | 200 | - |
| shuffled-10k-qa9 | 2000 | 200 | - |
| shuffled-10k-qa10 | 2000 | 200 | - |
| shuffled-10k-qa11 | 2000 | 200 | - |
| shuffled-10k-qa12 | 2000 | 200 | - |
| shuffled-10k-qa13 | 2000 | 200 | - |
| shuffled-10k-qa14 | 2000 | 200 | - |
| shuffled-10k-qa15 | 2500 | 250 | - |
| shuffled-10k-qa16 | 10000 | 1000 | - |
| shuffled-10k-qa17 | 1250 | 125 | - |
| shuffled-10k-qa18 | 1978 | 199 | - |
| shuffled-10k-qa19 | 10000 | 1000 | - |
| shuffled-10k-qa20 | 933 | 93 | - |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Code to generate tasks is available on [github](https://github.com/facebook/bAbI-tasks)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston, at Facebook Research.
### Licensing Information
```
Creative Commons Attribution 3.0 License
```
### Citation Information
```
@misc{dodge2016evaluating,
title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},
author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},
year={2016},
eprint={1511.06931},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
social_bias_frames | 2023-04-05T13:40:19.000Z | [
"task_categories:text2text-generation",
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"explanation-generation",
"region:us"
] | null | Social Bias Frames is a new way of representing the biases and offensiveness that are implied in language.
For example, these frames are meant to distill the implication that "women (candidates) are less qualified"
behind the statement "we shouldn’t lower our standards to hire more women." | @inproceedings{sap2020socialbiasframes,
title={Social Bias Frames: Reasoning about Social and Power Implications of Language},
author={Sap, Maarten and Gabriel, Saadia and Qin, Lianhui and Jurafsky, Dan and Smith, Noah A and Choi, Yejin},
year={2020},
booktitle={ACL},
} | null | 8 | 111 | ---
pretty_name: Social Bias Frames
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
- text-classification
task_ids:
- hate-speech-detection
paperswithcode_id: null
tags:
- explanation-generation
dataset_info:
features:
- name: whoTarget
dtype: string
- name: intentYN
dtype: string
- name: sexYN
dtype: string
- name: sexReason
dtype: string
- name: offensiveYN
dtype: string
- name: annotatorGender
dtype: string
- name: annotatorMinority
dtype: string
- name: sexPhrase
dtype: string
- name: speakerMinorityYN
dtype: string
- name: WorkerId
dtype: string
- name: HITId
dtype: string
- name: annotatorPolitics
dtype: string
- name: annotatorRace
dtype: string
- name: annotatorAge
dtype: string
- name: post
dtype: string
- name: targetMinority
dtype: string
- name: targetCategory
dtype: string
- name: targetStereotype
dtype: string
- name: dataSource
dtype: string
splits:
- name: test
num_bytes: 5371665
num_examples: 17501
- name: validation
num_bytes: 5096009
num_examples: 16738
- name: train
num_bytes: 34006886
num_examples: 112900
download_size: 9464583
dataset_size: 44474560
---
# Dataset Card for "social_bias_frames"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://homes.cs.washington.edu/~msap/social-bias-frames/](https://homes.cs.washington.edu/~msap/social-bias-frames/)
- **Repository:** [https://homes.cs.washington.edu/~msap/social-bias-frames/](https://homes.cs.washington.edu/~msap/social-bias-frames/)
- **Paper:** [Social Bias Frames: Reasoning about Social and Power Implications of Language](https://www.aclweb.org/anthology/2020.acl-main.486.pdf)
- **Leaderboard:**
- **Point of Contact:** [Maarten Sap](mailto:msap@cs.washington.edu)
- **Size of downloaded dataset files:** 6.32 MB
- **Size of the generated dataset:** 44.47 MB
- **Total amount of disk used:** 50.80 MB
### Dataset Summary
Warning: this document and dataset contain content that may be offensive or upsetting.
Social Bias Frames is a new way of representing the biases and offensiveness that are implied in language. For example, these frames are meant to distill the implication that "women (candidates) are less qualified" behind the statement "we shouldn’t lower our standards to hire more women." The Social Bias Inference Corpus (SBIC) supports large-scale learning and evaluation of social implications with over 150k structured annotations of social media posts, spanning over 34k implications about a thousand demographic groups.
### Supported Tasks and Leaderboards
This dataset supports both classification and generation. Sap et al. developed several models using the SBIC. They report an F1 score of 78.8 in predicting whether the posts in the test set were offensive, an F1 score of 78.6 in predicting whether the posts were intending to be offensive, an F1 score of 80.7 in predicting whether the posts were lewd, and an F1 score of 69.9 in predicting whether the posts were targeting a specific group.
Another of Sap et al.’s models performed better in the generation task. They report a BLEU score of 77.9, a Rouge-L score of 68.7, and a WMD score of 0.74 in generating a description of the targeted group given a post, as well as a BLEU score of 52.6, a Rouge-L score of 44.9, and a WMD score of 2.79 in generating a description of the implied offensive statement given a post. See the paper for further details.
### Languages
The language in SBIC is predominantly white-aligned English (78%, using a lexical dialect detector, [Blodgett et al., 2016](https://www.aclweb.org/anthology/D16-1120)). The curators find less than 10% of posts in SBIC are detected to have the AAE dialect category. The BCP-47 language tag is, presumably, en-US.
## Dataset Structure
### Data Instances
Each instance contains a post that may contain an offensive statement and annotated information concerning the nature of the offensive implication as well as the demographics of the annotator and origin of the post. See the [Social Bias Frames dataset viewer](https://huggingface.co/datasets/viewer/?dataset=social_bias_frames) to explore more examples.
#### default
- **Size of downloaded dataset files:** 6.32 MB
- **Size of the generated dataset:** 44.47 MB
- **Total amount of disk used:** 50.80 MB
An example of 'validation' looks as follows.
```
{
"HITId": "3Z8UJEJOCZ74QHXOTFP0JOOGD7039J",
"WorkerId": "7162172225567546910",
"annotatorAge": "27.0",
"annotatorGender": "woman",
"annotatorMinority": "women",
"annotatorPolitics": "liberal",
"annotatorRace": "white",
"dataSource": "t/davidson",
"intentYN": "1.0",
"offensiveYN": "1.0",
"post": "Every half-breed cop in America is trying to rape white women.",
"sexPhrase": "rape white women",
"sexReason": "rape",
"sexYN": "1.0",
"speakerMinorityYN": "0.0",
"targetCategory": "race",
"targetMinority": "mixed folks",
"targetStereotype": "mixed folks are rapists.",
"whoTarget": "1.0"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- _whoTarget_: a string, ‘0.0’ if the target is a group, ‘1.0’ if the target is an individual, and blank if the post is not offensive
- _intentYN_: a string indicating if the intent behind the statement was to offend. This is a categorical variable with four possible answers, ‘1.0’ if yes, ‘0.66’ if probably, ‘0.33’ if probably not, and ‘0.0’ if no.
- _sexYN_: a string indicating whether the post contains a sexual or lewd reference. This is a categorical variable with three possible answers, ‘1.0’ if yes, ‘0.5’ if maybe, ‘0.0’ if no.
- _sexReason_: a string containing a free text explanation of what is sexual if indicated so, blank otherwise
- _offensiveYN_: a string indicating if the post could be offensive to anyone. This is a categorical variable with three possible answers, ‘1.0’ if yes, ‘0.5’ if maybe, ‘0.0’ if no.
- _annotatorGender_: a string indicating the gender of the MTurk worker
- _annotatorMinority_: a string indicating whether the MTurk worker identifies as a minority
- _sexPhrase_: a string indicating which part of the post references something sexual, blank otherwise
- _speakerMinorityYN_: a string indicating whether the speaker was part of the same minority group that's being targeted. This is a categorical variable with three possible answers, ‘1.0’ if yes, ‘0.5’ if maybe, ‘0.0’ if no.
- _WorkerId_: a string hashed version of the MTurk workerId
- _HITId_: a string id that uniquely identifies each post
- _annotatorPolitics_: a string indicating the political leaning of the MTurk worker
- _annotatorRace_: a string indicating the race of the MTurk worker
- _annotatorAge_: a string indicating the age of the MTurk worker
- _post_: a string containing the text of the post that was annotated
- _targetMinority_: a string indicating the demographic group targeted
- _targetCategory_: a string indicating the high-level category of the demographic group(s) targeted
- _targetStereotype_: a string containing the implied statement
- _dataSource_: a string indicating the source of the post (`t/...`: means Twitter, `r/...`: means a subreddit)
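Because the rating fields above are stored as strings (with blank strings where a field does not apply), a small parsing helper is convenient when working with the data. A sketch — the helper names are hypothetical, not part of any official loader:

```python
def parse_rating(value):
    """Parse an SBIC string-coded rating ('1.0', '0.66', '0.5', '0.33', '0.0')
    into a float; a blank string (field not applicable) becomes None."""
    return float(value) if value else None

def is_offensive(example, threshold=0.5):
    """True when this annotation's offensiveYN rating meets the threshold."""
    score = parse_rating(example["offensiveYN"])
    return score is not None and score >= threshold
```

Note that each row is a single annotation, so per-post labels still need to be aggregated across the (typically three) annotations per post.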
### Data Splits
To ensure that no post appeared in multiple splits, the curators defined a training instance as the post and its three sets of annotations. They then split the dataset into train, validation, and test sets (75%/12.5%/12.5%).
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|112900| 16738|17501|
## Dataset Creation
### Curation Rationale
The main aim for this dataset is to cover a wide variety of social biases that are implied in text, both subtle and overt, and make the biases representative of real world discrimination that people experience [RWJF 2017](https://web.archive.org/web/20200620105955/https://www.rwjf.org/en/library/research/2017/10/discrimination-in-america--experiences-and-views.html). The curators also included some innocuous statements to balance out the biased, offensive, or harmful content.
### Source Data
The curators included online posts from the following sources sometime between 2014-2019:
- r/darkJokes, r/meanJokes, r/offensiveJokes
- Reddit microaggressions ([Breitfeller et al., 2019](https://www.aclweb.org/anthology/D19-1176/))
- Toxic language detection Twitter corpora ([Waseem & Hovy, 2016](https://www.aclweb.org/anthology/N16-2013/); [Davidson et al., 2017](https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/viewPaper/15665); [Founta et al., 2018](https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/viewPaper/17909))
- Data scraped from hate sites (Gab, Stormfront, r/incels, r/mensrights)
#### Initial Data Collection and Normalization
The curators wanted posts to be as self-contained as possible, therefore, they applied some filtering to prevent posts from being highly context-dependent. For Twitter data, they filtered out @-replies, retweets, and links, and subsampled posts such that there is a smaller correlation between AAE and offensiveness (to avoid racial bias; [Sap et al., 2019](https://www.aclweb.org/anthology/P19-1163/)). For Reddit, Gab, and Stormfront, they only selected posts that were one sentence long, do not contain links, and are between 10 and 80 words. Furthermore, for Reddit, they automatically removed posts that target automated moderation.
#### Who are the source language producers?
Due to the nature of this corpus, there is no way to know who the speakers are. But, the speakers of the Reddit, Gab, and Stormfront posts are likely white men (see [Gender by subreddit](http://bburky.com/subredditgenderratios/), [Gab users](https://en.wikipedia.org/wiki/Gab_(social_network)#cite_note-insidetheright-22), [Stormfront description](https://en.wikipedia.org/wiki/Stormfront_(website))).
### Annotations
#### Annotation process
For each post, Amazon Mechanical Turk workers indicate whether the post is offensive, whether the intent was to offend, and whether it contains lewd or sexual content. Only if annotators indicate potential offensiveness do they answer the group implication question. If the post targets or references a group or demographic, workers select or write which one(s); per selected group, they then write two to four stereotypes. Finally, workers are asked whether they think the speaker is part of one of the minority groups referenced by the post. The curators collected three annotations per post, and restricted the worker pool to the U.S. and Canada. The annotations in SBIC showed 82.4% pairwise agreement and Krippendorf’s α=0.45 on average.
Recent work has highlighted various negative side effects caused by annotating potentially abusive or harmful content (e.g., acute stress; Roberts, 2016). The curators mitigated these by limiting the number of posts that one worker could annotate in one day, paying workers above minimum wage ($7–12), and providing crisis management resources to the annotators.
#### Who are the annotators?
The annotators are Amazon Mechanical Turk workers aged 36±10 years old. The annotators consisted of 55% women, 42% men, and <1% non-binary; 82% identified as White, 4% Asian, 4% Hispanic, and 4% Black. Information on their first language(s) and professional backgrounds was not collected.
### Personal and Sensitive Information
Usernames are not included with the data, but the site where the post was collected is, so the user could potentially be recovered.
## Considerations for Using the Data
### Social Impact of Dataset
The curators recognize that studying Social Bias Frames necessarily requires confronting online content that may be offensive or disturbing but argue that deliberate avoidance does not eliminate such problems. By assessing social media content through the lens of Social Bias Frames, automatic flagging or AI-augmented writing interfaces may be analyzed for potentially harmful online content with detailed explanations for users or moderators to consider and verify. In addition, the collective analysis over large corpora can also be insightful for educating people on reducing unconscious biases in their language by encouraging empathy towards a targeted group.
### Discussion of Biases
Because this is a corpus of social biases, a lot of posts contain implied or overt biases against the following groups (in decreasing order of prevalence):
- gender/sexuality
- race/ethnicity
- religion/culture
- social/political
- disability body/age
- victims
The curators warn that technology trained on this dataset could have side effects such as censorship and dialect-based racial bias.
### Other Known Limitations
Because the curators found that the dataset is predominantly written in White-aligned English, they caution researchers to consider the potential for dialect or identity-based biases in labelling ([Davidson et al.,2019](https://www.aclweb.org/anthology/W19-3504.pdf); [Sap et al., 2019a](https://www.aclweb.org/anthology/P19-1163.pdf)) before deploying technology based on SBIC.
## Additional Information
### Dataset Curators
This dataset was developed by Maarten Sap of the Paul G. Allen School of Computer Science & Engineering at the University of Washington, Saadia Gabriel, Lianhui Qin, Noah A Smith, and Yejin Choi of the Paul G. Allen School of Computer Science & Engineering and the Allen Institute for Artificial Intelligence, and Dan Jurafsky of the Linguistics & Computer Science Departments of Stanford University.
### Licensing Information
The SBIC is licensed under the [Creative Commons 4.0 License](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{sap-etal-2020-social,
title = "Social Bias Frames: Reasoning about Social and Power Implications of Language",
author = "Sap, Maarten and
Gabriel, Saadia and
Qin, Lianhui and
Jurafsky, Dan and
Smith, Noah A. and
Choi, Yejin",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.486",
doi = "10.18653/v1/2020.acl-main.486",
pages = "5477--5490",
abstract = "Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people{'}s judgments about others. For example, given a statement that {``}we shouldn{'}t lower our standards to hire more women,{''} most listeners will infer the implicature intended by the speaker - that {``}women (candidates) are less qualified.{''} Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover Social Bias Frames from unstructured text. We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80{\%} F1), they are not effective at spelling out more detailed explanations in terms of Social Bias Frames. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications.",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@otakumesi](https://github.com/otakumesi), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
smilegate-ai/kor_unsmile | 2022-03-27T12:29:28.000Z | [
"region:us"
] | smilegate-ai | null | null | null | 0 | 111 | Entry not found |
IIC/qges | 2022-06-16T12:11:00.000Z | [
"region:us"
] | IIC | null | null | null | 0 | 111 | Entry not found |
merve/supersoaker-failures | 2022-09-08T16:06:06.000Z | [
"license:apache-2.0",
"region:us"
] | merve | null | null | null | 0 | 111 | ---
license: apache-2.0
---
|
zoheb/sketch-scene | 2022-10-30T10:07:48.000Z | [
"task_categories:text-to-image",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<10K",
"source_datasets:FS-COCO",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | zoheb | null | null | null | 11 | 111 | ---
license: cc-by-nc-sa-4.0
language:
- en
language_creators:
- machine-generated
multilinguality:
- monolingual
pretty_name: 'Sketch Scene Descriptions'
size_categories:
- n<10K
source_datasets:
- FS-COCO
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey well scene content but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context.}
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` |
mstz/glass | 2023-04-16T17:29:45.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"language:en",
"license:cc",
"glass",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_glass_efficiency_242,
author = {Tsanas,Athanasios & Xifara,Angeliki},
title = {{Glass efficiency}},
year = {2012},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C51307}}
} | null | 0 | 111 | ---
language:
- en
tags:
- glass
- tabular_classification
- binary_classification
- UCI
pretty_name: Glass evaluation
size_categories:
- n<1k
task_categories:
- tabular-classification
configs:
- glass
- windows
- vehicles
- containers
- tableware
- headlamps
license: cc
---
# Glass
The [Glass dataset](https://archive-beta.ics.uci.edu/dataset/42/glass+identification) from the [UCI repository](https://archive-beta.ics.uci.edu).
Classify the type of glass.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------|
| glass | Multiclass classification | Classify glass type. |
| windows | Binary classification | Is this windows glass? |
| vehicles | Binary classification | Is this vehicles glass? |
| containers | Binary classification | Is this containers glass?|
| tableware | Binary classification | Is this tableware glass? |
| headlamps | Binary classification | Is this headlamps glass? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/glass", "glass")["train"]
``` |
math-eval/TAL-SCQ5K | 2023-09-15T06:37:10.000Z | [
"license:mit",
"region:us"
] | math-eval | null | null | null | 16 | 111 | ---
license: mit
---
<h1 align="center">TAL-SCQ5K</h1>
## Dataset Description
### Dataset Summary
TAL-SCQ5K-EN/TAL-SCQ5K-CN are high-quality mathematical competition datasets in English and Chinese created by TAL Education Group, each consisting of 5K questions (3K training and 2K testing). The questions are multiple-choice and cover mathematical topics at the primary, junior high, and high school levels. In addition, detailed solution steps are provided to facilitate CoT training, and all mathematical expressions in the questions are presented as standard text-mode LaTeX.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in TAL-SCQ5K-EN is in English and TAL-SCQ5K-CN is in Chinese.
## Dataset Structure
### Data Instances
```
{
"dataset_name": "prime_math_competition_en_single_choice_8K_dev",
"dataset_version": "2023-07-07",
"qid": "244",
"queId": "8afc802a8c304199b1040f11ffa2e92a",
"competition_source_list": [],
"difficulty": "2",
"qtype": "single_choice",
"problem": "A $14$-digit. number $666666 XY 444444$ is a multiple of $26$. If $X$ and $Y$ are both positive, what is the smallest vaue of $X+ Y$? ",
"answer_option_list": [
[{
"aoVal": "A",
"content": "$$3$$ "
}],
[{
"aoVal": "B",
"content": "$$4$$ "
}],
[{
"aoVal": "C",
"content": "$$9$$ "
}],
[{
"aoVal": "D",
"content": "$$14$$ "
}],
[{
"aoVal": "E",
"content": "None of the above "
}]
],
"knowledge_point_routes": ["Overseas Competition->Knowledge Point->Number Theory Modules->Division without Remainders->Divisibility Rules"],
"answer_analysis": ["Since $1001$ is a multiple of $13$, $111111 = 111 \\times 1001$ is also a multiple of $13$. It follows that both $666666$ and $444444$ are both multiples of $26$. $666666XY 444444 = 66666600000000 + XY 000000 + 444444$ $\\Rightarrow XY$ must be divisible by $13$. Smallest $X+Y=1+3=4$. "],
"answer_value": "B"
}
```
### Data Fields
* "dataset_name": identification of the source dataset name from which TAL-SCQ5K-EN/TAL-SCQ5K-CN has been created, use only for inner of TAL education group, please ignore.
* "dataset_version": identification of the source dataset version from which TAL-SCQ5K-EN/TAL-SCQ5K-CN has been created, use only for inner of TAL education group, please ignore.
* "qid": identification of local id of the question in the source dataset from which TAL-SCQ5K-EN/TAL-SCQ5K-CN has been created, use only for inner of TAL education group, please ignore.
* "queId": identification of global id of the question, use only for inner of TAL education group, please ignore.
* "competition_source_list": identification of math competitions in which the questions appeared, if have been logged.
* "difficulty": difficulty level of the questions, value ranged from 0 to 4
* "qtype": question type, valued as "single_choice" for all the questions in this dataset indicates that all the questions are multiple-choice questions with unique ground-truth answer.
* "problem": the question string to a math competition question.
* "answer_option_list": answer choices to be selected
* "knowledge_point_routes": knowledge point route from coarse-grained to fine-grained.
* "answer_analysis": step-by-step answer analysis of the questions, which helps CoT training
* "answer_value": value of the ground-truth answer choice
### Data Splits
<style>
table th:first-of-type {
width: 40%;
}
table th:nth-of-type(2) {
width: 30%;
}
table th:nth-of-type(3) {
width: 30%;
}
</style>
| name|train|test |
|:---:|:----:|:----:|
|TAL-SCQ5K-EN|3K |2K |
|TAL-SCQ5K-CN|3K |2K |
## Usage
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the data_dir argument of the load_dataset() function as follows:
```python
from datasets import load_dataset
# Load all subsets (share the same schema)
dataset = load_dataset("math-eval/TAL-SCQ5K")
# Load TAL-SCQ5K-EN
dataset = load_dataset("math-eval/TAL-SCQ5K", data_dir="TAL-SCQ5K-EN")
# Load TAL-SCQ5K-CN
dataset = load_dataset("math-eval/TAL-SCQ5K", data_dir="TAL-SCQ5K-CN")
```
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The TAL-SCQ5K dataset is licensed under the [MIT License](https://opensource.org/license/mit/)
### Citation Information
[More Information Needed]
### Contact
The original authors host this dataset on GitHub (https://github.com/math-eval/TAL-SCQ5K). You can submit inquiries to: matheval.ai@gmail.com |
biomrc | 2023-04-05T09:41:42.000Z | [
"language:en",
"region:us"
] | null | We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard. | @inproceedings{pappas-etal-2020-biomrc,
title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension",
author = "Pappas, Dimitris and
Stavropoulos, Petros and
Androutsopoulos, Ion and
McDonald, Ryan",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.15",
pages = "140--149",
abstract = "We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.",
} | null | 3 | 110 | ---
language:
- en
paperswithcode_id: biomrc
pretty_name: BIOMRC
dataset_info:
- config_name: plain_text
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1653301820
num_examples: 700000
- name: validation
num_bytes: 119697683
num_examples: 50000
- name: test
num_bytes: 147832373
num_examples: 62707
download_size: 408080356
dataset_size: 1920831876
- config_name: biomrc_large_A
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1653301820
num_examples: 700000
- name: validation
num_bytes: 119697683
num_examples: 50000
- name: test
num_bytes: 147832373
num_examples: 62707
download_size: 408080356
dataset_size: 1920831876
- config_name: biomrc_large_B
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1325877001
num_examples: 700000
- name: validation
num_bytes: 96414040
num_examples: 50000
- name: test
num_bytes: 118708586
num_examples: 62707
download_size: 343061539
dataset_size: 1540999627
- config_name: biomrc_small_A
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 206553549
num_examples: 87500
- name: validation
num_bytes: 14957163
num_examples: 6250
- name: test
num_bytes: 14807799
num_examples: 6250
download_size: 68879274
dataset_size: 236318511
- config_name: biomrc_small_B
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 165662937
num_examples: 87500
- name: validation
num_bytes: 12047304
num_examples: 6250
- name: test
num_bytes: 11911172
num_examples: 6250
download_size: 57706889
dataset_size: 189621413
- config_name: biomrc_tiny_A
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 70914
num_examples: 30
download_size: 22519
dataset_size: 70914
- config_name: biomrc_tiny_B
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 59925
num_examples: 30
download_size: 19685
dataset_size: 59925
---
# Dataset Card for "biomrc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://nlp.cs.aueb.gr/](http://nlp.cs.aueb.gr/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.29 GB
- **Size of the generated dataset:** 5.81 GB
- **Total amount of disk used:** 7.09 GB
### Dataset Summary
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### biomrc_large_A
- **Size of downloaded dataset files:** 408.08 MB
- **Size of the generated dataset:** 1.92 GB
- **Total amount of disk used:** 2.33 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"OBJECTIVES: @entity9 is a @entity10 that may result from greater occipital nerve entrapment. Entrapped peripheral nerves typica...",
"answer": "@entity9 :: (MESH:D009437,Disease) :: ['unilateral occipital neuralgia']\n",
"entities_list": ["@entity1 :: ('9606', 'Species') :: ['patients']", "@entity10 :: ('MESH:D006261', 'Disease') :: ['headache', 'Headache']", "@entity9 :: ('MESH:D009437', 'Disease') :: ['Occipital neuralgia', 'unilateral occipital neuralgia']"],
"title": "Sonographic evaluation of the greater occipital nerve in XXXX .\n"
}
```
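In the "A" settings, each `entities_list` string packs three pieces of information separated by `" :: "`: the entity placeholder, a `(code, type)` tuple, and a list of surface aliases. A small parser sketch for that layout (it targets `entities_list` entries; note the `answer` string above uses a slightly different, unquoted tuple format):

```python
import ast

def parse_entity(entry):
    """Split a setting-A entities_list string into (id, (code, type), aliases).

    Expects the " :: "-separated layout seen in the examples above, e.g.
    "@entity1 :: ('9606', 'Species') :: ['patients']".
    """
    entity_id, code_type, aliases = entry.split(" :: ")
    # The tuple and list parts are valid Python literals, so literal_eval
    # turns them back into native objects safely.
    return entity_id, ast.literal_eval(code_type), ast.literal_eval(aliases)

entry = "@entity9 :: ('MESH:D009437', 'Disease') :: ['Occipital neuralgia', 'unilateral occipital neuralgia']"
eid, (code, etype), aliases = parse_entity(entry)
print(eid, code, etype)  # -> @entity9 MESH:D009437 Disease
```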
#### biomrc_large_B
- **Size of downloaded dataset files:** 343.06 MB
- **Size of the generated dataset:** 1.54 GB
- **Total amount of disk used:** 1.88 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"BACKGROUND: Adults with physical disabilities are less likely than others to receive @entity2 screening. It is not known, howev...",
"answer": "@entity2",
"entities_list": ["@entity2", "@entity1", "@entity0", "@entity3"],
"title": "Does a standard measure of self-reported physical disability correlate with clinician perception of impairment related to XXXX screening?\n"
}
```
#### biomrc_small_A
- **Size of downloaded dataset files:** 68.88 MB
- **Size of the generated dataset:** 236.32 MB
- **Total amount of disk used:** 305.20 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"PURPOSE: @entity120 ( @entity120 ) is a life-limiting @entity102 that presents as an elevated blood pressure in the pulmonary a...",
"answer": "@entity148 :: (MESH:D001008,Disease) :: ['anxiety']\n",
"entities_list": "[\"@entity1 :: ('9606', 'Species') :: ['patients']\", \"@entity308 :: ('MESH:D003866', 'Disease') :: ['depression']\", \"@entity146 :...",
"title": "A predictive model of the effects of @entity308 , XXXX , stress, 6-minute-walk distance, and social support on health-related quality of life in an adult pulmonary hypertension population.\n"
}
```
#### biomrc_small_B
- **Size of downloaded dataset files:** 57.70 MB
- **Size of the generated dataset:** 189.62 MB
- **Total amount of disk used:** 247.33 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"Single-agent activity for @entity12 reflected by response rates of 10%-30% has been reported in @entity0 with @entity3 ( @entit...",
"answer": "@entity10",
"entities_list": ["@entity0", "@entity6", "@entity2", "@entity5", "@entity12", "@entity11", "@entity1", "@entity7", "@entity9", "@entity10", "@entity3", "@entity4", "@entity8"],
"title": "No synergistic activity of @entity7 and XXXX in the treatment of @entity3 .\n"
}
```
#### biomrc_tiny_A
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.09 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"OBJECTIVE: Decompressive craniectomy (DC) requires later cranioplasty (CP) in survivors. However, if additional ventriculoperit...",
"answer": "@entity260 :: (MESH:D011183,Disease) :: ['Postoperative Complications']\n",
"entities_list": ["@entity1 :: ('9606', 'Species') :: ['Patients', 'patients', 'Patient']", "@entity260 :: ('MESH:D011183', 'Disease') :: ['VPS regarding postoperative complications']", "@entity1276 :: ('MESH:D006849', 'Disease') :: ['hydrocephalus']"],
"title": "Cranioplasty and Ventriculoperitoneal Shunt Placement after Decompressive Craniectomy: Staged Surgery Is Associated with Fewer XXXX .\n"
}
```
### Data Fields
The data fields are the same among all splits.
#### biomrc_large_A
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_large_B
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_small_A
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_small_B
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_tiny_A
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
### Data Splits
#### biomrc_large_A
| |train |validation|test |
|--------------|-----:|---------:|----:|
|biomrc_large_A|700000| 50000|62707|
#### biomrc_large_B
| |train |validation|test |
|--------------|-----:|---------:|----:|
|biomrc_large_B|700000| 50000|62707|
#### biomrc_small_A
| |train|validation|test|
|--------------|----:|---------:|---:|
|biomrc_small_A|87500| 6250|6250|
#### biomrc_small_B
| |train|validation|test|
|--------------|----:|---------:|---:|
|biomrc_small_B|87500| 6250|6250|
#### biomrc_tiny_A
| |test|
|-------------|---:|
|biomrc_tiny_A| 30|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{pappas-etal-2020-biomrc,
title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension",
author = "Pappas, Dimitris and
Stavropoulos, Petros and
Androutsopoulos, Ion and
McDonald, Ryan",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.15",
pages = "140--149",
abstract = "We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.",
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@PetrosStav](https://github.com/PetrosStav), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
swahili_news | 2023-01-25T14:45:11.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sw",
"license:cc-by-4.0",
"region:us"
] | null | Swahili is spoken by 100-150 million people across East Africa. In Tanzania, it is one of two national languages (the other is English) and it is the official language of instruction in all schools. News in Swahili is an important part of the media sphere in Tanzania.
News contributes to education, technology, and the economic growth of a country, and news in local languages plays an important cultural role in many African countries. In the modern age, African languages in news and other spheres are at risk of being lost as English becomes the dominant language in online spaces.
The Swahili news dataset was created to reduce the gap in using the Swahili language to build NLP technologies and to help AI practitioners in Tanzania and across the African continent practice their NLP skills on Swahili-language problems in organizations or societies. Swahili news was collected from different websites that provide news in the Swahili language. I was able to find some websites that provide news in Swahili only and others in multiple languages including Swahili.
The dataset was created for the specific task of text classification: each news article can be categorized into one of six topics (local news, international news, finance news, health news, sports news, and entertainment news). The dataset comes with a specified train/test split: the train set contains 75% of the dataset and the test set contains 25%. | @dataset{davis_david_2020_5514203,
author = {Davis David},
title = {Swahili : News Classification Dataset},
month = dec,
year = 2020,
note = {{The news version contains both train and test sets.}},
publisher = {Zenodo},
version = {0.2},
doi = {10.5281/zenodo.5514203},
url = {https://doi.org/10.5281/zenodo.5514203}
} | null | 2 | 110 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- sw
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: 'Swahili : News Classification Dataset'
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': uchumi
'1': kitaifa
'2': michezo
'3': kimataifa
'4': burudani
'5': afya
config_name: swahili_news
splits:
- name: train
num_bytes: 49517855
num_examples: 22207
- name: test
num_bytes: 16093496
num_examples: 7338
download_size: 65618408
dataset_size: 65611351
---
# Dataset Card for Swahili : News Classification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Homepage for Swahili News classification dataset](https://doi.org/10.5281/zenodo.4300293)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swahili is spoken by 100-150 million people across East Africa. In Tanzania, it is one of two national languages (the other is English) and it is the official language of instruction in all schools. News in Swahili is an important part of the media sphere in Tanzania.
News contributes to education, technology, and the economic growth of a country, and news in local languages plays an important cultural role in many African countries. In the modern age, African languages in news and other spheres are at risk of being lost as English becomes the dominant language in online spaces.
The Swahili news dataset was created to reduce the gap in using the Swahili language to build NLP technologies and to help AI practitioners in Tanzania and across the African continent practice their NLP skills on Swahili-language problems in organizations or societies. Swahili news was collected from different websites that provide news in the Swahili language. I was able to find some websites that provide news in Swahili only and others in multiple languages including Swahili.
The dataset was created for the specific task of text classification: each news article can be categorized into one of six topics (local news, international news, finance news, health news, sports news, and entertainment news). The dataset comes with a specified train/test split: the train set contains 75% of the dataset and the test set contains 25%.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language used is Swahili.
## Dataset Structure
### Data Instances
A data instance:
```
{
'text': ' Bodi ya Utalii Tanzania (TTB) imesema, itafanya misafara ya kutangaza utalii kwenye miji minne nchini China kati ya Juni 19 hadi Juni 26 mwaka huu.Misafara hiyo itatembelea miji ya Beijing Juni 19, Shanghai Juni 21, Nanjig Juni 24 na Changsha Juni 26.Mwenyekiti wa bodi TTB, Jaji Mstaafu Thomas Mihayo ameyasema hayo kwenye mkutano na waandishi wa habari jijini Dar es Salaam.“Tunafanya jitihada kuhakikisha tunavuna watalii wengi zaidi kutoka China hasa tukizingatia umuhimu wa soko la sekta ya utalii nchini,” amesema Jaji Mihayo.Novemba 2018 TTB ilifanya ziara kwenye miji ya Beijing, Shanghai, Chengdu, Guangzhou na Hong Kong kutangaza vivutio vya utalii sanjari kuzitangaza safari za ndege za Air Tanzania.Ziara hiyo inaelezwa kuzaa matunda ikiwa ni pamoja na watalii zaidi ya 300 kuja nchini Mei mwaka huu kutembelea vivutio vya utalii.',
'label': 0
}
```
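The integer `label` in each instance maps to the Swahili topic names declared in this card's metadata (`uchumi`, `kitaifa`, `michezo`, `kimataifa`, `burudani`, `afya`). A minimal lookup sketch — the English glosses in the comments are my own reading of the Swahili words:

```python
# Label ids in the order declared in the card's class_label metadata.
LABEL_NAMES = [
    "uchumi",     # economy / finance
    "kitaifa",    # national (local) news
    "michezo",    # sports
    "kimataifa",  # international news
    "burudani",   # entertainment
    "afya",       # health
]

def label_name(label_id):
    """Map an integer label from the dataset to its Swahili topic name."""
    return LABEL_NAMES[label_id]

print(label_name(0))  # -> uchumi
```

If you load the dataset with the `datasets` library, `dataset.features["label"].int2str(0)` should give the same mapping directly from the `ClassLabel` feature.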
### Data Fields
- `text`: the news articles
- `label`: the label of the news article
### Data Splits
The dataset contains train and test splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
```
@dataset{davis_david_2020_5514203,
author = {Davis David},
title = {Swahili : News Classification Dataset},
month = dec,
year = 2020,
note = {{The news version contains both train and test sets.}},
publisher = {Zenodo},
version = {0.2},
doi = {10.5281/zenodo.5514203},
url = {https://doi.org/10.5281/zenodo.5514203}
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. |
xglue | 2023-06-30T09:06:30.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:token-classification",
"task_ids:acceptability-classification",
"task_ids:extractive-qa",
"task_ids:named-entity-recognition",
"task_ids:natural-language-inference",
"task_ids:news-articles-headline-generation",
"task_ids:open-domain-qa",
"task_ids:parsing",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"source_datasets:extended|squad",
"source_datasets:extended|xnli",
"source_datasets:original",
"language:ar",
"language:bg",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"language:sw",
"language:th",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"license:other",
"paraphrase-identification",
"question-answering",
"arxiv:2004.01401",
"region:us"
] | null | XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained
models with respect to cross-lingual natural language understanding and generation.
The benchmark is composed of the following 11 tasks:
- NER
- POS Tagging (POS)
- News Classification (NC)
- MLQA
- XNLI
- PAWS-X
- Query-Ad Matching (QADSM)
- Web Page Ranking (WPR)
- QA Matching (QAM)
- Question Generation (QG)
- News Title Generation (NTG)
For more information, please take a look at https://microsoft.github.io/XGLUE/. | @article{Liang2020XGLUEAN,
title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},
author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi
and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei
Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao
and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos
and Rangan Majumder and Ming Zhou},
journal={arXiv},
year={2020},
volume={abs/2004.01401}
} | null | 20 | 110 | ---
annotations_creators:
- crowdsourced
- expert-generated
- found
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- found
- machine-generated
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- it
- nl
- pl
- pt
- ru
- sw
- th
- tr
- ur
- vi
- zh
license:
- other
multilinguality:
- multilingual
- translation
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|conll2003
- extended|squad
- extended|xnli
- original
task_categories:
- question-answering
- summarization
- text-classification
- text2text-generation
- token-classification
task_ids:
- acceptability-classification
- extractive-qa
- named-entity-recognition
- natural-language-inference
- news-articles-headline-generation
- open-domain-qa
- parsing
- topic-classification
pretty_name: XGLUE
license_details: Licence Universal Dependencies v2.5
tags:
- paraphrase-identification
- question-answering
dataset_info:
- config_name: ner
features:
- name: words
sequence: string
- name: ner
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3445854
num_examples: 14042
- name: validation.en
num_bytes: 866569
num_examples: 3252
- name: validation.de
num_bytes: 917967
num_examples: 2874
- name: validation.es
num_bytes: 888551
num_examples: 1923
- name: validation.nl
num_bytes: 659144
num_examples: 2895
- name: test.en
num_bytes: 784976
num_examples: 3454
- name: test.de
num_bytes: 922741
num_examples: 3007
- name: test.es
num_bytes: 864804
num_examples: 1523
- name: test.nl
num_bytes: 1196660
num_examples: 5202
download_size: 875905871
dataset_size: 10547266
- config_name: pos
features:
- name: words
sequence: string
- name: pos
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 7279459
num_examples: 25376
- name: validation.en
num_bytes: 421410
num_examples: 2001
- name: validation.de
num_bytes: 219328
num_examples: 798
- name: validation.es
num_bytes: 620491
num_examples: 1399
- name: validation.nl
num_bytes: 198003
num_examples: 717
- name: validation.bg
num_bytes: 346802
num_examples: 1114
- name: validation.el
num_bytes: 229447
num_examples: 402
- name: validation.fr
num_bytes: 600964
num_examples: 1475
- name: validation.pl
num_bytes: 620694
num_examples: 2214
- name: validation.tr
num_bytes: 186196
num_examples: 987
- name: validation.vi
num_bytes: 203669
num_examples: 799
- name: validation.zh
num_bytes: 212579
num_examples: 499
- name: validation.ur
num_bytes: 284016
num_examples: 551
- name: validation.hi
num_bytes: 838700
num_examples: 1658
- name: validation.it
num_bytes: 198608
num_examples: 563
- name: validation.ar
num_bytes: 592943
num_examples: 908
- name: validation.ru
num_bytes: 261563
num_examples: 578
- name: validation.th
num_bytes: 272834
num_examples: 497
- name: test.en
num_bytes: 420613
num_examples: 2076
- name: test.de
num_bytes: 291759
num_examples: 976
- name: test.es
num_bytes: 200003
num_examples: 425
- name: test.nl
num_bytes: 193337
num_examples: 595
- name: test.bg
num_bytes: 339460
num_examples: 1115
- name: test.el
num_bytes: 235137
num_examples: 455
- name: test.fr
num_bytes: 166865
num_examples: 415
- name: test.pl
num_bytes: 600534
num_examples: 2214
- name: test.tr
num_bytes: 186519
num_examples: 982
- name: test.vi
num_bytes: 211408
num_examples: 799
- name: test.zh
num_bytes: 202055
num_examples: 499
- name: test.ur
num_bytes: 288189
num_examples: 534
- name: test.hi
num_bytes: 839659
num_examples: 1683
- name: test.it
num_bytes: 173861
num_examples: 481
- name: test.ar
num_bytes: 561709
num_examples: 679
- name: test.ru
num_bytes: 255393
num_examples: 600
- name: test.th
num_bytes: 272834
num_examples: 497
download_size: 875905871
dataset_size: 19027041
- config_name: mlqa
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 75307933
num_examples: 87599
- name: validation.en
num_bytes: 1255587
num_examples: 1148
- name: validation.de
num_bytes: 454258
num_examples: 512
- name: validation.ar
num_bytes: 785493
num_examples: 517
- name: validation.es
num_bytes: 388625
num_examples: 500
- name: validation.hi
num_bytes: 1092167
num_examples: 507
- name: validation.vi
num_bytes: 692227
num_examples: 511
- name: validation.zh
num_bytes: 411213
num_examples: 504
- name: test.en
num_bytes: 13264513
num_examples: 11590
- name: test.de
num_bytes: 4070659
num_examples: 4517
- name: test.ar
num_bytes: 7976090
num_examples: 5335
- name: test.es
num_bytes: 4044224
num_examples: 5253
- name: test.hi
num_bytes: 11385051
num_examples: 4918
- name: test.vi
num_bytes: 7559078
num_examples: 5495
- name: test.zh
num_bytes: 4092921
num_examples: 5137
download_size: 875905871
dataset_size: 132780039
- config_name: nc
features:
- name: news_title
dtype: string
- name: news_body
dtype: string
- name: news_category
dtype:
class_label:
names:
'0': foodanddrink
'1': sports
'2': travel
'3': finance
'4': lifestyle
'5': news
'6': entertainment
'7': health
'8': video
'9': autos
splits:
- name: train
num_bytes: 280615806
num_examples: 100000
- name: validation.en
num_bytes: 33389140
num_examples: 10000
- name: validation.de
num_bytes: 26757254
num_examples: 10000
- name: validation.es
num_bytes: 31781308
num_examples: 10000
- name: validation.fr
num_bytes: 27154099
num_examples: 10000
- name: validation.ru
num_bytes: 46053007
num_examples: 10000
- name: test.en
num_bytes: 34437987
num_examples: 10000
- name: test.de
num_bytes: 26632007
num_examples: 10000
- name: test.es
num_bytes: 31350078
num_examples: 10000
- name: test.fr
num_bytes: 27589545
num_examples: 10000
- name: test.ru
num_bytes: 46183830
num_examples: 10000
download_size: 875905871
dataset_size: 611944061
- config_name: xnli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 74444346
num_examples: 392702
- name: validation.en
num_bytes: 433471
num_examples: 2490
- name: validation.ar
num_bytes: 633009
num_examples: 2490
- name: validation.bg
num_bytes: 774069
num_examples: 2490
- name: validation.de
num_bytes: 494612
num_examples: 2490
- name: validation.el
num_bytes: 841234
num_examples: 2490
- name: validation.es
num_bytes: 478430
num_examples: 2490
- name: validation.fr
num_bytes: 510112
num_examples: 2490
- name: validation.hi
num_bytes: 1023923
num_examples: 2490
- name: validation.ru
num_bytes: 786450
num_examples: 2490
- name: validation.sw
num_bytes: 429858
num_examples: 2490
- name: validation.th
num_bytes: 1061168
num_examples: 2490
- name: validation.tr
num_bytes: 459316
num_examples: 2490
- name: validation.ur
num_bytes: 699960
num_examples: 2490
- name: validation.vi
num_bytes: 590688
num_examples: 2490
- name: validation.zh
num_bytes: 384859
num_examples: 2490
- name: test.en
num_bytes: 875142
num_examples: 5010
- name: test.ar
num_bytes: 1294561
num_examples: 5010
- name: test.bg
num_bytes: 1573042
num_examples: 5010
- name: test.de
num_bytes: 996487
num_examples: 5010
- name: test.el
num_bytes: 1704793
num_examples: 5010
- name: test.es
num_bytes: 969821
num_examples: 5010
- name: test.fr
num_bytes: 1029247
num_examples: 5010
- name: test.hi
num_bytes: 2073081
num_examples: 5010
- name: test.ru
num_bytes: 1603474
num_examples: 5010
- name: test.sw
num_bytes: 871659
num_examples: 5010
- name: test.th
num_bytes: 2147023
num_examples: 5010
- name: test.tr
num_bytes: 934942
num_examples: 5010
- name: test.ur
num_bytes: 1416246
num_examples: 5010
- name: test.vi
num_bytes: 1190225
num_examples: 5010
- name: test.zh
num_bytes: 777937
num_examples: 5010
download_size: 875905871
dataset_size: 103503185
- config_name: paws-x
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': different
'1': same
splits:
- name: train
num_bytes: 12018349
num_examples: 49401
- name: validation.en
num_bytes: 484287
num_examples: 2000
- name: validation.de
num_bytes: 506009
num_examples: 2000
- name: validation.es
num_bytes: 505888
num_examples: 2000
- name: validation.fr
num_bytes: 525031
num_examples: 2000
- name: test.en
num_bytes: 486734
num_examples: 2000
- name: test.de
num_bytes: 516214
num_examples: 2000
- name: test.es
num_bytes: 511111
num_examples: 2000
- name: test.fr
num_bytes: 527101
num_examples: 2000
download_size: 875905871
dataset_size: 16080724
- config_name: qadsm
features:
- name: query
dtype: string
- name: ad_title
dtype: string
- name: ad_description
dtype: string
- name: relevance_label
dtype:
class_label:
names:
'0': Bad
'1': Good
splits:
- name: train
num_bytes: 12528141
num_examples: 100000
- name: validation.en
num_bytes: 1248839
num_examples: 10000
- name: validation.de
num_bytes: 1566011
num_examples: 10000
- name: validation.fr
num_bytes: 1651804
num_examples: 10000
- name: test.en
num_bytes: 1236997
num_examples: 10000
- name: test.de
num_bytes: 1563985
num_examples: 10000
- name: test.fr
num_bytes: 1594118
num_examples: 10000
download_size: 875905871
dataset_size: 21389895
- config_name: wpr
features:
- name: query
dtype: string
- name: web_page_title
dtype: string
- name: web_page_snippet
dtype: string
- name: relavance_label
dtype:
class_label:
names:
'0': Bad
'1': Fair
'2': Good
'3': Excellent
'4': Perfect
splits:
- name: train
num_bytes: 33885931
num_examples: 99997
- name: validation.en
num_bytes: 3417760
num_examples: 10008
- name: validation.de
num_bytes: 2929029
num_examples: 10004
- name: validation.es
num_bytes: 2451026
num_examples: 10004
- name: validation.fr
num_bytes: 3055899
num_examples: 10005
- name: validation.it
num_bytes: 2416388
num_examples: 10003
- name: validation.pt
num_bytes: 2449797
num_examples: 10001
- name: validation.zh
num_bytes: 3118577
num_examples: 10002
- name: test.en
num_bytes: 3402487
num_examples: 10004
- name: test.de
num_bytes: 2923577
num_examples: 9997
- name: test.es
num_bytes: 2422895
num_examples: 10006
- name: test.fr
num_bytes: 3059392
num_examples: 10020
- name: test.it
num_bytes: 2403736
num_examples: 10001
- name: test.pt
num_bytes: 2462350
num_examples: 10015
- name: test.zh
num_bytes: 3141598
num_examples: 9999
download_size: 875905871
dataset_size: 73540442
- config_name: qam
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 28357964
num_examples: 100000
- name: validation.en
num_bytes: 3085501
num_examples: 10000
- name: validation.de
num_bytes: 3304031
num_examples: 10000
- name: validation.fr
num_bytes: 3142833
num_examples: 10000
- name: test.en
num_bytes: 3082297
num_examples: 10000
- name: test.de
num_bytes: 3309496
num_examples: 10000
- name: test.fr
num_bytes: 3140213
num_examples: 10000
download_size: 875905871
dataset_size: 47422335
- config_name: qg
features:
- name: answer_passage
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 27464034
num_examples: 100000
- name: validation.en
num_bytes: 3047040
num_examples: 10000
- name: validation.de
num_bytes: 3270877
num_examples: 10000
- name: validation.es
num_bytes: 3341775
num_examples: 10000
- name: validation.fr
num_bytes: 3175615
num_examples: 10000
- name: validation.it
num_bytes: 3191193
num_examples: 10000
- name: validation.pt
num_bytes: 3328434
num_examples: 10000
- name: test.en
num_bytes: 3043813
num_examples: 10000
- name: test.de
num_bytes: 3270190
num_examples: 10000
- name: test.es
num_bytes: 3353522
num_examples: 10000
- name: test.fr
num_bytes: 3178352
num_examples: 10000
- name: test.it
num_bytes: 3195684
num_examples: 10000
- name: test.pt
num_bytes: 3340296
num_examples: 10000
download_size: 875905871
dataset_size: 66200825
- config_name: ntg
features:
- name: news_body
dtype: string
- name: news_title
dtype: string
splits:
- name: train
num_bytes: 890709581
num_examples: 300000
- name: validation.en
num_bytes: 34317076
num_examples: 10000
- name: validation.de
num_bytes: 27404379
num_examples: 10000
- name: validation.es
num_bytes: 30896109
num_examples: 10000
- name: validation.fr
num_bytes: 27261523
num_examples: 10000
- name: validation.ru
num_bytes: 43247386
num_examples: 10000
- name: test.en
num_bytes: 33697284
num_examples: 10000
- name: test.de
num_bytes: 26738202
num_examples: 10000
- name: test.es
num_bytes: 31111489
num_examples: 10000
- name: test.fr
num_bytes: 26997447
num_examples: 10000
- name: test.ru
num_bytes: 44050350
num_examples: 10000
download_size: 875905871
dataset_size: 1216430826
config_names:
- mlqa
- nc
- ner
- ntg
- paws-x
- pos
- qadsm
- qam
- qg
- wpr
- xnli
---
# Dataset Card for XGLUE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [XGLUE homepage](https://microsoft.github.io/XGLUE/)
- **Paper:** [XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation](https://arxiv.org/abs/2004.01401)
- **Point of Contact:** [xglue@microsoft.com](mailto:xglue@microsoft.com?subject=XGLUE Feedback)
### Dataset Summary
XGLUE is a new benchmark dataset to evaluate the performance of cross-lingual pre-trained models with respect to
cross-lingual natural language understanding and generation.
XGLUE is composed of 11 tasks spanning 19 languages. For each task, the training data is available only in English.
This means that to succeed at XGLUE, a model must have a strong zero-shot cross-lingual transfer capability: it must learn
from the English data of a specific task and transfer what it learned to other languages. Compared to its concurrent
work XTREME, XGLUE has two distinguishing characteristics: first, it includes both cross-lingual NLU and cross-lingual NLG
tasks; second, besides 5 existing cross-lingual tasks (i.e. NER, POS, MLQA, PAWS-X and XNLI), XGLUE
adds 6 new tasks drawn from Bing scenarios: News Classification (NC), Query-Ad Matching (QADSM),
Web Page Ranking (WPR), QA Matching (QAM), Question Generation (QG) and News Title Generation (NTG). This diversity
of languages, tasks and task origins makes XGLUE a comprehensive benchmark for quantifying the quality of a pre-trained
model on cross-lingual natural language understanding and generation.
The training data of each task is in English while the validation and test data is present in multiple different languages.
The following table shows which languages are present as validation and test data for each config.

Therefore, for each config, a cross-lingual pre-trained model should be fine-tuned on the English training data and evaluated on all languages.
### Supported Tasks and Leaderboards
The XGLUE leaderboard can be found on the [homepage](https://microsoft.github.io/XGLUE/) and
consists of an XGLUE-Understanding Score (the average of the tasks `ner`, `pos`, `mlqa`, `nc`, `xnli`, `paws-x`, `qadsm`, `wpr`, `qam`) and an XGLUE-Generation Score (the average of the tasks `qg`, `ntg`).
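As a concrete illustration, the two leaderboard scores are plain averages over the per-task scores. A minimal sketch with made-up per-task numbers (the scores below are illustrative, not real leaderboard results):

```python
# Hypothetical per-task scores (illustrative numbers only, not real results).
understanding_tasks = {
    "ner": 82.6, "pos": 81.3, "mlqa": 66.0, "nc": 83.5, "xnli": 75.6,
    "paws-x": 86.2, "qadsm": 68.3, "wpr": 73.9, "qam": 68.9,
}
generation_tasks = {"qg": 10.9, "ntg": 8.1}

def xglue_score(task_scores):
    """Average the per-task scores, as the leaderboard does."""
    return sum(task_scores.values()) / len(task_scores)

understanding = xglue_score(understanding_tasks)
generation = xglue_score(generation_tasks)
print(round(understanding, 2), round(generation, 2))  # -> 76.26 9.5
```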
### Languages
For all tasks (configurations), the "train" split is in English (`en`).
For each task, the "validation" and "test" splits are present in these languages:
- ner: `en`, `de`, `es`, `nl`
- pos: `en`, `de`, `es`, `nl`, `bg`, `el`, `fr`, `pl`, `tr`, `vi`, `zh`, `ur`, `hi`, `it`, `ar`, `ru`, `th`
- mlqa: `en`, `de`, `ar`, `es`, `hi`, `vi`, `zh`
- nc: `en`, `de`, `es`, `fr`, `ru`
- xnli: `en`, `ar`, `bg`, `de`, `el`, `es`, `fr`, `hi`, `ru`, `sw`, `th`, `tr`, `ur`, `vi`, `zh`
- paws-x: `en`, `de`, `es`, `fr`
- qadsm: `en`, `de`, `fr`
- wpr: `en`, `de`, `es`, `fr`, `it`, `pt`, `zh`
- qam: `en`, `de`, `fr`
- qg: `en`, `de`, `es`, `fr`, `it`, `pt`
- ntg: `en`, `de`, `es`, `fr`, `ru`
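The language lists above determine which named splits each config exposes (one `validation.<lang>` and one `test.<lang>` split per language, plus the English-only `train`). A small sketch of that naming scheme; the mapping below covers only three configs for brevity:

```python
# Languages per config, copied from the lists above (three configs shown).
CONFIG_LANGS = {
    "paws-x": ["en", "de", "es", "fr"],
    "qadsm": ["en", "de", "fr"],
    "ntg": ["en", "de", "es", "fr", "ru"],
}

def expected_splits(config):
    """Generate the split names a config exposes: train + per-language eval splits."""
    langs = CONFIG_LANGS[config]
    return (["train"]
            + [f"validation.{lang}" for lang in langs]
            + [f"test.{lang}" for lang in langs])

print(expected_splits("qadsm"))
# -> ['train', 'validation.en', 'validation.de', 'validation.fr',
#     'test.en', 'test.de', 'test.fr']
```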
## Dataset Structure
### Data Instances
#### ner
An example of 'test.nl' looks as follows.
```json
{
"ner": [
"O",
"O",
"O",
"B-LOC",
"O",
"B-LOC",
"O",
"B-LOC",
"O",
"O",
"O",
"O",
"O",
"O",
"O",
"B-PER",
"I-PER",
"O",
"O",
"B-LOC",
"O",
"O"
],
"words": [
"Dat",
"is",
"in",
"Itali\u00eb",
",",
"Spanje",
"of",
"Engeland",
"misschien",
"geen",
"probleem",
",",
"maar",
"volgens",
"'",
"Der",
"Kaiser",
"'",
"in",
"Duitsland",
"wel",
"."
]
}
```
#### pos
An example of 'test.fr' looks as follows.
```json
{
"pos": [
"PRON",
"VERB",
"SCONJ",
"ADP",
"PRON",
"CCONJ",
"DET",
"NOUN",
"ADP",
"NOUN",
"CCONJ",
"NOUN",
"ADJ",
"PRON",
"PRON",
"AUX",
"ADV",
"VERB",
"PUNCT",
"PRON",
"VERB",
"VERB",
"DET",
"ADJ",
"NOUN",
"ADP",
"DET",
"NOUN",
"PUNCT"
],
"words": [
"Je",
"sens",
"qu'",
"entre",
"\u00e7a",
"et",
"les",
"films",
"de",
"m\u00e9decins",
"et",
"scientifiques",
"fous",
"que",
"nous",
"avons",
"d\u00e9j\u00e0",
"vus",
",",
"nous",
"pourrions",
"emprunter",
"un",
"autre",
"chemin",
"pour",
"l'",
"origine",
"."
]
}
```
#### mlqa
An example of 'test.hi' looks as follows.
```json
{
"answers": {
"answer_start": [
378
],
"text": [
"\u0909\u0924\u094d\u0924\u0930 \u092a\u0942\u0930\u094d\u0935"
]
},
"context": "\u0909\u0938\u0940 \"\u090f\u0930\u093f\u092f\u093e XX \" \u0928\u093e\u092e\u0915\u0930\u0923 \u092a\u094d\u0930\u0923\u093e\u0932\u0940 \u0915\u093e \u092a\u094d\u0930\u092f\u094b\u0917 \u0928\u0947\u0935\u093e\u0926\u093e \u092a\u0930\u0940\u0915\u094d\u0937\u0923 \u0938\u094d\u0925\u0932 \u0915\u0947 \u0905\u0928\u094d\u092f \u092d\u093e\u0917\u094b\u0902 \u0915\u0947 \u0932\u093f\u090f \u0915\u093f\u092f\u093e \u0917\u092f\u093e \u0939\u0948\u0964\u092e\u0942\u0932 \u0930\u0942\u092a \u092e\u0947\u0902 6 \u092c\u091f\u0947 10 \u092e\u0940\u0932 \u0915\u093e \u092f\u0939 \u0906\u092f\u0924\u093e\u0915\u093e\u0930 \u0905\u0921\u094d\u0921\u093e \u0905\u092c \u0924\u0925\u093e\u0915\u0925\u093f\u0924 '\u0917\u094d\u0930\u0942\u092e \u092c\u0949\u0915\u094d\u0938 \" \u0915\u093e \u090f\u0915 \u092d\u093e\u0917 \u0939\u0948, \u091c\u094b \u0915\u093f 23 \u092c\u091f\u0947 25.3 \u092e\u0940\u0932 \u0915\u093e \u090f\u0915 \u092a\u094d\u0930\u0924\u093f\u092c\u0902\u0927\u093f\u0924 \u0939\u0935\u093e\u0908 \u0915\u094d\u0937\u0947\u0924\u094d\u0930 \u0939\u0948\u0964 \u092f\u0939 \u0915\u094d\u0937\u0947\u0924\u094d\u0930 NTS \u0915\u0947 \u0906\u0902\u0924\u0930\u093f\u0915 \u0938\u0921\u093c\u0915 \u092a\u094d\u0930\u092c\u0902\u0927\u0928 \u0938\u0947 \u091c\u0941\u0921\u093c\u093e \u0939\u0948, \u091c\u093f\u0938\u0915\u0940 \u092a\u0915\u094d\u0915\u0940 \u0938\u0921\u093c\u0915\u0947\u0902 \u0926\u0915\u094d\u0937\u093f\u0923 \u092e\u0947\u0902 \u092e\u0930\u0915\u0930\u0940 \u0915\u0940 \u0913\u0930 \u0914\u0930 \u092a\u0936\u094d\u091a\u093f\u092e \u092e\u0947\u0902 \u092f\u0941\u0915\u094d\u0915\u093e \u092b\u094d\u0932\u0948\u091f \u0915\u0940 \u0913\u0930 \u091c\u093e\u0924\u0940 \u0939\u0948\u0902\u0964 \u091d\u0940\u0932 \u0938\u0947 \u0909\u0924\u094d\u0924\u0930 \u092a\u0942\u0930\u094d\u0935 \u0915\u0940 \u0913\u0930 \u092c\u0922\u093c\u0924\u0947 \u0939\u0941\u090f \u0935\u094d\u092f\u093e\u092a\u0915 \u0914\u0930 \u0914\u0930 
\u0938\u0941\u0935\u094d\u092f\u0935\u0938\u094d\u0925\u093f\u0924 \u0917\u094d\u0930\u0942\u092e \u091d\u0940\u0932 \u0915\u0940 \u0938\u0921\u093c\u0915\u0947\u0902 \u090f\u0915 \u0926\u0930\u094d\u0930\u0947 \u0915\u0947 \u091c\u0930\u093f\u092f\u0947 \u092a\u0947\u091a\u0940\u0926\u093e \u092a\u0939\u093e\u0921\u093c\u093f\u092f\u094b\u0902 \u0938\u0947 \u0939\u094b\u0915\u0930 \u0917\u0941\u091c\u0930\u0924\u0940 \u0939\u0948\u0902\u0964 \u092a\u0939\u0932\u0947 \u0938\u0921\u093c\u0915\u0947\u0902 \u0917\u094d\u0930\u0942\u092e \u0918\u093e\u091f\u0940",
"question": "\u091d\u0940\u0932 \u0915\u0947 \u0938\u093e\u092a\u0947\u0915\u094d\u0937 \u0917\u094d\u0930\u0942\u092e \u0932\u0947\u0915 \u0930\u094b\u0921 \u0915\u0939\u093e\u0901 \u091c\u093e\u0924\u0940 \u0925\u0940?"
}
```
#### nc
An example of 'test.es' looks as follows.
```json
{
"news_body": "El bizcocho es seguramente el producto m\u00e1s b\u00e1sico y sencillo de toda la reposter\u00eda : consiste en poco m\u00e1s que mezclar unos cuantos ingredientes, meterlos al horno y esperar a que se hagan. Por obra y gracia del impulsor qu\u00edmico, tambi\u00e9n conocido como \"levadura de tipo Royal\", despu\u00e9s de un rato de calorcito esta combinaci\u00f3n de harina, az\u00facar, huevo, grasa -aceite o mantequilla- y l\u00e1cteo se transforma en uno de los productos m\u00e1s deliciosos que existen para desayunar o merendar . Por muy manazas que seas, es m\u00e1s que probable que tu bizcocho casero supere en calidad a cualquier infamia industrial envasada. Para lograr un bizcocho digno de admiraci\u00f3n s\u00f3lo tienes que respetar unas pocas normas que afectan a los ingredientes, proporciones, mezclado, horneado y desmoldado. Todas las tienes resumidas en unos dos minutos el v\u00eddeo de arriba, en el que adem \u00e1s aprender\u00e1s alg\u00fan truquillo para que tu bizcochaco quede m\u00e1s fino, jugoso, esponjoso y amoroso. M\u00e1s en MSN:",
"news_category": "foodanddrink",
"news_title": "Cocina para lerdos: las leyes del bizcocho"
}
```
#### xnli
An example of 'validation.th' looks as follows.
```json
{
"hypothesis": "\u0e40\u0e02\u0e32\u0e42\u0e17\u0e23\u0e2b\u0e32\u0e40\u0e40\u0e21\u0e48\u0e02\u0e2d\u0e07\u0e40\u0e02\u0e32\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e23\u0e27\u0e14\u0e40\u0e23\u0e47\u0e27\u0e2b\u0e25\u0e31\u0e07\u0e08\u0e32\u0e01\u0e17\u0e35\u0e48\u0e23\u0e16\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e48\u0e07\u0e40\u0e02\u0e32\u0e40\u0e40\u0e25\u0e49\u0e27",
"label": 1,
"premise": "\u0e41\u0e25\u0e30\u0e40\u0e02\u0e32\u0e1e\u0e39\u0e14\u0e27\u0e48\u0e32, \u0e21\u0e48\u0e32\u0e21\u0e4a\u0e32 \u0e1c\u0e21\u0e2d\u0e22\u0e39\u0e48\u0e1a\u0e49\u0e32\u0e19"
}
```
#### paws-x
An example of 'test.es' looks as follows.
```json
{
"label": 1,
"sentence1": "La excepci\u00f3n fue entre fines de 2005 y 2009 cuando jug\u00f3 en Suecia con Carlstad United BK, Serbia con FK Borac \u010ca\u010dak y el FC Terek Grozny de Rusia.",
"sentence2": "La excepci\u00f3n se dio entre fines del 2005 y 2009, cuando jug\u00f3 con Suecia en el Carlstad United BK, Serbia con el FK Borac \u010ca\u010dak y el FC Terek Grozny de Rusia."
}
```
#### qadsm
An example of 'train' looks as follows.
```json
{
"ad_description": "Your New England Cruise Awaits! Holland America Line Official Site.",
"ad_title": "New England Cruises",
"query": "cruise portland maine",
"relevance_label": 1
}
```
#### wpr
An example of 'test.zh' looks as follows.
```json
{
"query": "maxpro\u5b98\u7f51",
"relavance_label": 0,
"web_page_snippet": "\u5728\u7ebf\u8d2d\u4e70\uff0c\u552e\u540e\u670d\u52a1\u3002vivo\u667a\u80fd\u624b\u673a\u5f53\u5b63\u660e\u661f\u673a\u578b\u6709NEX\uff0cvivo X21\uff0cvivo X20\uff0c\uff0cvivo X23\u7b49\uff0c\u5728vivo\u5b98\u7f51\u8d2d\u4e70\u624b\u673a\u53ef\u4ee5\u4eab\u53d712 \u671f\u514d\u606f\u4ed8\u6b3e\u3002 \u54c1\u724c Funtouch OS \u4f53\u9a8c\u5e97 | ...",
"wed_page_title": "vivo\u667a\u80fd\u624b\u673a\u5b98\u65b9\u7f51\u7ad9-AI\u975e\u51e1\u6444\u5f71X23"
}
```
#### qam
An example of 'validation.en' looks as follows.
```json
{
"annswer": "Erikson has stated that after the last novel of the Malazan Book of the Fallen was finished, he and Esslemont would write a comprehensive guide tentatively named The Encyclopaedia Malazica.",
"label": 0,
"question": "main character of malazan book of the fallen"
}
```
#### qg
An example of 'test.de' looks as follows.
```json
{
"answer_passage": "Medien bei WhatsApp automatisch speichern. Tippen Sie oben rechts unter WhatsApp auf die drei Punkte oder auf die Men\u00fc-Taste Ihres Smartphones. Dort wechseln Sie in die \"Einstellungen\" und von hier aus weiter zu den \"Chat-Einstellungen\". Unter dem Punkt \"Medien Auto-Download\" k\u00f6nnen Sie festlegen, wann die WhatsApp-Bilder heruntergeladen werden sollen.",
"question": "speichenn von whats app bilder unterbinden"
}
```
#### ntg
An example of 'test.en' looks as follows.
```json
{
"news_body": "Check out this vintage Willys Pickup! As they say, the devil is in the details, and it's not every day you see such attention paid to every last area of a restoration like with this 1961 Willys Pickup . Already the Pickup has a unique look that shares some styling with the Jeep, plus some original touches you don't get anywhere else. It's a classy way to show up to any event, all thanks to Hollywood Motors . A burgundy paint job contrasts with white lower panels and the roof. Plenty of tasteful chrome details grace the exterior, including the bumpers, headlight bezels, crossmembers on the grille, hood latches, taillight bezels, exhaust finisher, tailgate hinges, etc. Steel wheels painted white and chrome hubs are a tasteful addition. Beautiful oak side steps and bed strips add a touch of craftsmanship to this ride. This truck is of real showroom quality, thanks to the astoundingly detailed restoration work performed on it, making this Willys Pickup a fierce contender for best of show. Under that beautiful hood is a 225 Buick V6 engine mated to a three-speed manual transmission, so you enjoy an ideal level of control. Four wheel drive is functional, making it that much more utilitarian and downright cool. The tires are new, so you can enjoy a lot of life out of them, while the wheels and hubs are in great condition. Just in case, a fifth wheel with a tire and a side mount are included. Just as important, this Pickup runs smoothly, so you can go cruising or even hit the open road if you're interested in participating in some classic rallies. You might associate Willys with the famous Jeep CJ, but the automaker did produce a fair amount of trucks. The Pickup is quite the unique example, thanks to distinct styling that really turns heads, making it a favorite at quite a few shows. 
Source: Hollywood Motors Check These Rides Out Too: Fear No Trails With These Off-Roaders 1965 Pontiac GTO: American Icon For Sale In Canada Low-Mileage 1955 Chevy 3100 Represents Turn In Pickup Market",
"news_title": "This 1961 Willys Pickup Will Let You Cruise In Style"
}
```
### Data Fields
#### ner
In the following each data field in ner is explained. The data fields are the same among all splits.
- `words`: a list of words composing the sentence.
- `ner`: a list of entity classes corresponding to each word respectively.
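Since `words` and `ner` are parallel lists, they can be zipped into (word, tag) pairs, e.g. to pull out the named-entity tokens. A minimal sketch using a shortened version of the Dutch example above:

```python
# Shortened version of the 'test.nl' example shown earlier.
example = {
    "words": ["Dat", "is", "in", "Italië", ",", "Spanje", "of", "Engeland"],
    "ner":   ["O",   "O",  "O",  "B-LOC",  "O", "B-LOC",  "O",  "B-LOC"],
}

# Keep only the words carrying an entity tag (anything other than "O").
entities = [w for w, t in zip(example["words"], example["ner"]) if t != "O"]
print(entities)  # -> ['Italië', 'Spanje', 'Engeland']
```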
#### pos
In the following each data field in pos is explained. The data fields are the same among all splits.
- `words`: a list of words composing the sentence.
- `pos`: a list of "part-of-speech" classes corresponding to each word respectively.
#### mlqa
In the following each data field in mlqa is explained. The data fields are the same among all splits.
- `context`: a string, the context containing the answer.
- `question`: a string, the question to be answered.
- `answers`: a dictionary containing a list of answer `text` strings and the corresponding `answer_start` character offsets within `context`.
#### nc
In the following each data field in nc is explained. The data fields are the same among all splits.
- `news_title`: a string, the title of the news report.
- `news_body`: a string, the actual news report.
- `news_category`: a class label, the category of the news report, *e.g.* `foodanddrink`
#### xnli
In the following each data field in xnli is explained. The data fields are the same among all splits.
- `premise`: a string, the context/premise, *i.e.* the first sentence for natural language inference.
- `hypothesis`: a string, a sentence whose relation to `premise` is to be classified, *i.e.* the second sentence for natural language inference.
- `label`: a class label (int), the natural language inference relation between `hypothesis` and `premise`. One of 0: entailment, 1: neutral, 2: contradiction.
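A minimal sketch of mapping the integer labels back to class names, in the order declared by the config's `label` feature (with the `datasets` library loaded, `ClassLabel.int2str` provides the same mapping):

```python
# Class names in the order declared by the xnli config's label feature.
XNLI_LABELS = ["entailment", "neutral", "contradiction"]

def int2str(label):
    """Map an integer label to its class name."""
    return XNLI_LABELS[label]

print(int2str(1))  # -> 'neutral'
```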
#### paws-x
In the following each data field in paws-x is explained. The data fields are the same among all splits.
- `sentence1`: a string, a sentence.
- `sentence2`: a string, a sentence which is either a paraphrase of `sentence1` or not.
- `label`: a class label (int), whether `sentence2` is a paraphrase of `sentence1`. One of 0: different, 1: same.
#### qadsm
In the following each data field in qadsm is explained. The data fields are the same among all splits.
- `query`: a string, the search query one would insert into a search engine.
- `ad_title`: a string, the title of the advertisement.
- `ad_description`: a string, the content of the advertisement, *i.e.* the main body.
- `relevance_label`: a class label (int), how relevant the advertisement `ad_title` + `ad_description` is to the search query `query`. One of 0: Bad, 1: Good.
#### wpr
In the following each data field in wpr is explained. The data fields are the same among all splits.
- `query`: a string, the search query one would insert into a search engine.
- `web_page_title`: a string, the title of a web page.
- `web_page_snippet`: a string, the content of a web page, *i.e.* the main body.
- `relavance_label`: a class label (int), how relevant the web page `web_page_title` + `web_page_snippet` is to the search query `query`. One of 0: Bad, 1: Fair, 2: Good, 3: Excellent, 4: Perfect.
#### qam
In the following each data field in qam is explained. The data fields are the same among all splits.
- `question`: a string, a question.
- `answer`: a string, a possible answer to `question`.
- `label`: a class label (int), whether the `answer` is relevant to the `question`. One of 0: False, 1: True.
#### qg
In the following each data field in qg is explained. The data fields are the same among all splits.
- `answer_passage`: a string, a detailed answer to the `question`.
- `question`: a string, a question.
#### ntg
In the following each data field in ntg is explained. The data fields are the same among all splits.
- `news_body`: a string, the content of a news article.
- `news_title`: a string, the title corresponding to the news article `news_body`.
### Data Splits
#### ner
The following table shows the number of data samples/number of rows for each split in ner.
| |train|validation.en|validation.de|validation.es|validation.nl|test.en|test.de|test.es|test.nl|
|---|----:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|
|ner|14042| 3252| 2874| 1923| 2895| 3454| 3007| 1523| 5202|
#### pos
The following table shows the number of data samples/number of rows for each split in pos.
| |train|validation.en|validation.de|validation.es|validation.nl|validation.bg|validation.el|validation.fr|validation.pl|validation.tr|validation.vi|validation.zh|validation.ur|validation.hi|validation.it|validation.ar|validation.ru|validation.th|test.en|test.de|test.es|test.nl|test.bg|test.el|test.fr|test.pl|test.tr|test.vi|test.zh|test.ur|test.hi|test.it|test.ar|test.ru|test.th|
|---|----:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
|pos|25376| 2001| 798| 1399| 717| 1114| 402| 1475| 2214| 987| 799| 499| 551| 1658| 563| 908| 578| 497| 2076| 976| 425| 595| 1115| 455| 415| 2214| 982| 799| 499| 534| 1683| 481| 679| 600| 497|
#### mlqa
The following table shows the number of data samples/number of rows for each split in mlqa.
| |train|validation.en|validation.de|validation.ar|validation.es|validation.hi|validation.vi|validation.zh|test.en|test.de|test.ar|test.es|test.hi|test.vi|test.zh|
|----|----:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|------:|
|mlqa|87599| 1148| 512| 517| 500| 507| 511| 504| 11590| 4517| 5335| 5253| 4918| 5495| 5137|
#### nc
The following table shows the number of data samples/number of rows for each split in nc.
| |train |validation.en|validation.de|validation.es|validation.fr|validation.ru|test.en|test.de|test.es|test.fr|test.ru|
|---|-----:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|
|nc |100000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000|
#### xnli
The following table shows the number of data samples/number of rows for each split in xnli.
| |train |validation.en|validation.ar|validation.bg|validation.de|validation.el|validation.es|validation.fr|validation.hi|validation.ru|validation.sw|validation.th|validation.tr|validation.ur|validation.vi|validation.zh|test.en|test.ar|test.bg|test.de|test.el|test.es|test.fr|test.hi|test.ru|test.sw|test.th|test.tr|test.ur|test.vi|test.zh|
|----|-----:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
|xnli|392702| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 2490| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010| 5010|
#### paws-x
The following table shows the number of data samples/number of rows for each split in paws-x.
| |train|validation.en|validation.de|validation.es|validation.fr|test.en|test.de|test.es|test.fr|
|------|----:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|
|paws-x|49401| 2000| 2000| 2000| 2000| 2000| 2000| 2000| 2000|
#### qadsm
The following table shows the number of data samples/number of rows for each split in qadsm.
| |train |validation.en|validation.de|validation.fr|test.en|test.de|test.fr|
|-----|-----:|------------:|------------:|------------:|------:|------:|------:|
|qadsm|100000| 10000| 10000| 10000| 10000| 10000| 10000|
#### wpr
The following table shows the number of data samples/number of rows for each split in wpr.
| |train|validation.en|validation.de|validation.es|validation.fr|validation.it|validation.pt|validation.zh|test.en|test.de|test.es|test.fr|test.it|test.pt|test.zh|
|---|----:|------------:|------------:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|------:|
|wpr|99997| 10008| 10004| 10004| 10005| 10003| 10001| 10002| 10004| 9997| 10006| 10020| 10001| 10015| 9999|
#### qam
The following table shows the number of data samples/number of rows for each split in qam.
| |train |validation.en|validation.de|validation.fr|test.en|test.de|test.fr|
|---|-----:|------------:|------------:|------------:|------:|------:|------:|
|qam|100000| 10000| 10000| 10000| 10000| 10000| 10000|
#### qg
The following table shows the number of data samples/number of rows for each split in qg.
| |train |validation.en|validation.de|validation.es|validation.fr|validation.it|validation.pt|test.en|test.de|test.es|test.fr|test.it|test.pt|
|---|-----:|------------:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|------:|
|qg |100000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000|
#### ntg
The following table shows the number of data samples/number of rows for each split in ntg.
| |train |validation.en|validation.de|validation.es|validation.fr|validation.ru|test.en|test.de|test.es|test.fr|test.ru|
|---|-----:|------------:|------------:|------------:|------------:|------------:|------:|------:|------:|------:|------:|
|ntg|300000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000| 10000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset is maintained mainly by Yaobo Liang, Yeyun Gong, Nan Duan, Ming Gong, Linjun Shou, and Daniel Campos from Microsoft Research.
### Licensing Information
The XGLUE datasets are intended for non-commercial research purposes only to promote advancement in the field of
artificial intelligence and related areas, and are made available free of charge without extending any license or other
intellectual property rights. The dataset is provided “as is” without warranty, and usage of the data has risks since we
may not own the underlying rights in the documents. We are not liable for any damages related to use of the dataset.
Feedback is voluntarily given and can be used as we see fit. Upon violation of any of these terms, your rights to use
the dataset will end automatically.
If you have questions about use of the dataset or any research outputs in your products or services, we encourage you
to undertake your own independent legal review. For other questions, please feel free to contact us.
### Citation Information
If you use this dataset, please cite it. Additionally, since XGLUE is also built out of 5 existing datasets, please
ensure you cite all of them.
An example:
```
We evaluate our model using the XGLUE benchmark \cite{Liang2020XGLUEAN}, a cross-lingual evaluation benchmark
consisting of Named Entity Recognition (NER) \cite{Sang2002IntroductionTT} \cite{Sang2003IntroductionTT},
Part of Speech Tagging (POS) \cite{11234/1-3105}, News Classification (NC), MLQA \cite{Lewis2019MLQAEC},
XNLI \cite{Conneau2018XNLIEC}, PAWS-X \cite{Yang2019PAWSXAC}, Query-Ad Matching (QADSM), Web Page Ranking (WPR),
QA Matching (QAM), Question Generation (QG) and News Title Generation (NTG).
```
```
@article{Liang2020XGLUEAN,
title={XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation},
author={Yaobo Liang and Nan Duan and Yeyun Gong and Ning Wu and Fenfei Guo and Weizhen Qi and Ming Gong and Linjun Shou and Daxin Jiang and Guihong Cao and Xiaodong Fan and Ruofei Zhang and Rahul Agrawal and Edward Cui and Sining Wei and Taroon Bharti and Ying Qiao and Jiun-Hung Chen and Winnie Wu and Shuguang Liu and Fan Yang and Daniel Campos and Rangan Majumder and Ming Zhou},
journal={arXiv},
year={2020},
volume={abs/2004.01401}
}
@misc{11234/1-3105,
title={Universal Dependencies 2.5},
author={Zeman, Daniel and Nivre, Joakim and Abrams, Mitchell and Aepli, No{\"e}mi and Agi{\'c}, {\v Z}eljko and Ahrenberg, Lars and Aleksandravi{\v c}i{\=u}t{\.e}, Gabriel{\.e} and Antonsen, Lene and Aplonova, Katya and Aranzabe, Maria Jesus and Arutie, Gashaw and Asahara, Masayuki and Ateyah, Luma and Attia, Mohammed and Atutxa, Aitziber and Augustinus, Liesbeth and Badmaeva, Elena and Ballesteros, Miguel and Banerjee, Esha and Bank, Sebastian and Barbu Mititelu, Verginica and Basmov, Victoria and Batchelor, Colin and Bauer, John and Bellato, Sandra and Bengoetxea, Kepa and Berzak, Yevgeni and Bhat, Irshad Ahmad and Bhat, Riyaz Ahmad and Biagetti, Erica and Bick, Eckhard and Bielinskien{\.e}, Agn{\.e} and Blokland, Rogier and Bobicev, Victoria and Boizou, Lo{\"{\i}}c and Borges V{\"o}lker, Emanuel and B{\"o}rstell, Carl and Bosco, Cristina and Bouma, Gosse and Bowman, Sam and Boyd, Adriane and Brokait{\.e}, Kristina and Burchardt, Aljoscha and Candito, Marie and Caron, Bernard and Caron, Gauthier and Cavalcanti, Tatiana and Cebiro{\u g}lu Eryi{\u g}it, G{\"u}l{\c s}en and Cecchini, Flavio Massimiliano and Celano, Giuseppe G. A. and {\v C}{\'e}pl{\"o}, Slavom{\'{\i}}r and Cetin, Savas and Chalub, Fabricio and Choi, Jinho and Cho, Yongseok and Chun, Jayeol and Cignarella, Alessandra T. 
and Cinkov{\'a}, Silvie and Collomb, Aur{\'e}lie and {\c C}{\"o}ltekin, {\c C}a{\u g}r{\i} and Connor, Miriam and Courtin, Marine and Davidson, Elizabeth and de Marneffe, Marie-Catherine and de Paiva, Valeria and de Souza, Elvis and Diaz de Ilarraza, Arantza and Dickerson, Carly and Dione, Bamba and Dirix, Peter and Dobrovoljc, Kaja and Dozat, Timothy and Droganova, Kira and Dwivedi, Puneet and Eckhoff, Hanne and Eli, Marhaba and Elkahky, Ali and Ephrem, Binyam and Erina, Olga and Erjavec, Toma{\v z} and Etienne, Aline and Evelyn, Wograine and Farkas, Rich{\'a}rd and Fernandez Alcalde, Hector and Foster, Jennifer and Freitas, Cl{\'a}udia and Fujita, Kazunori and Gajdo{\v s}ov{\'a}, Katar{\'{\i}}na and Galbraith, Daniel and Garcia, Marcos and G{\"a}rdenfors, Moa and Garza, Sebastian and Gerdes, Kim and Ginter, Filip and Goenaga, Iakes and Gojenola, Koldo and G{\"o}k{\i}rmak, Memduh and Goldberg, Yoav and G{\'o}mez Guinovart, Xavier and Gonz{\'a}lez Saavedra, Berta and Grici{\=u}t{\.e}, Bernadeta and Grioni, Matias and Gr{\=u}z{\={\i}}tis, Normunds and Guillaume, Bruno and Guillot-Barbance, C{\'e}line and Habash, Nizar and Haji{\v c}, Jan and Haji{\v c} jr., Jan and H{\"a}m{\"a}l{\"a}inen, Mika and H{\`a} M{\~y}, Linh and Han, Na-Rae and Harris, Kim and Haug, Dag and Heinecke, Johannes and Hennig, Felix and Hladk{\'a}, Barbora and Hlav{\'a}{\v c}ov{\'a}, Jaroslava and Hociung, Florinel and Hohle, Petter and Hwang, Jena and Ikeda, Takumi and Ion, Radu and Irimia, Elena and Ishola, {\d O}l{\'a}j{\'{\i}}d{\'e} and Jel{\'{\i}}nek, Tom{\'a}{\v s} and Johannsen, Anders and J{\o}rgensen, Fredrik and Juutinen, Markus and Ka{\c s}{\i}kara, H{\"u}ner and Kaasen, Andre and Kabaeva, Nadezhda and Kahane, Sylvain and Kanayama, Hiroshi and Kanerva, Jenna and Katz, Boris and Kayadelen, Tolga and Kenney, Jessica and Kettnerov{\'a}, V{\'a}clava and Kirchner, Jesse and Klementieva, Elena and K{\"o}hn, Arne and Kopacewicz, Kamil and Kotsyba, Natalia and Kovalevskait{\.e}, Jolanta and 
Krek, Simon and Kwak, Sookyoung and Laippala, Veronika and Lambertino, Lorenzo and Lam, Lucia and Lando, Tatiana and Larasati, Septina Dian and Lavrentiev, Alexei and Lee, John and L{\^e} H{\`{\^o}}ng, Phương and Lenci, Alessandro and Lertpradit, Saran and Leung, Herman and Li, Cheuk Ying and Li, Josie and Li, Keying and Lim, {KyungTae} and Liovina, Maria and Li, Yuan and Ljube{\v s}i{\'c}, Nikola and Loginova, Olga and Lyashevskaya, Olga and Lynn, Teresa and Macketanz, Vivien and Makazhanov, Aibek and Mandl, Michael and Manning, Christopher and Manurung, Ruli and M{\u a}r{\u a}nduc, C{\u a}t{\u a}lina and Mare{\v c}ek, David and Marheinecke, Katrin and Mart{\'{\i}}nez Alonso, H{\'e}ctor and Martins, Andr{\'e} and Ma{\v s}ek, Jan and Matsumoto, Yuji and {McDonald}, Ryan and {McGuinness}, Sarah and Mendon{\c c}a, Gustavo and Miekka, Niko and Misirpashayeva, Margarita and Missil{\"a}, Anna and Mititelu, C{\u a}t{\u a}lin and Mitrofan, Maria and Miyao, Yusuke and Montemagni, Simonetta and More, Amir and Moreno Romero, Laura and Mori, Keiko Sophie and Morioka, Tomohiko and Mori, Shinsuke and Moro, Shigeki and Mortensen, Bjartur and Moskalevskyi, Bohdan and Muischnek, Kadri and Munro, Robert and Murawaki, Yugo and M{\"u}{\"u}risep, Kaili and Nainwani, Pinkey and Navarro Hor{\~n}iacek, Juan Ignacio and Nedoluzhko, Anna and Ne{\v s}pore-B{\=e}rzkalne, Gunta and Nguy{\~{\^e}}n Th{\d i}, Lương and Nguy{\~{\^e}}n Th{\d i} Minh, Huy{\`{\^e}}n and Nikaido, Yoshihiro and Nikolaev, Vitaly and Nitisaroj, Rattima and Nurmi, Hanna and Ojala, Stina and Ojha, Atul Kr. 
and Ol{\'u}{\`o}kun, Ad{\'e}day{\d o}̀ and Omura, Mai and Osenova, Petya and {\"O}stling, Robert and {\O}vrelid, Lilja and Partanen, Niko and Pascual, Elena and Passarotti, Marco and Patejuk, Agnieszka and Paulino-Passos, Guilherme and Peljak-{\L}api{\'n}ska, Angelika and Peng, Siyao and Perez, Cenel-Augusto and Perrier, Guy and Petrova, Daria and Petrov, Slav and Phelan, Jason and Piitulainen, Jussi and Pirinen, Tommi A and Pitler, Emily and Plank, Barbara and Poibeau, Thierry and Ponomareva, Larisa and Popel, Martin and Pretkalni{\c n}a, Lauma and Pr{\'e}vost, Sophie and Prokopidis, Prokopis and Przepi{\'o}rkowski, Adam and Puolakainen, Tiina and Pyysalo, Sampo and Qi, Peng and R{\"a}{\"a}bis, Andriela and Rademaker, Alexandre and Ramasamy, Loganathan and Rama, Taraka and Ramisch, Carlos and Ravishankar, Vinit and Real, Livy and Reddy, Siva and Rehm, Georg and Riabov, Ivan and Rie{\ss}ler, Michael and Rimkut{\.e}, Erika and Rinaldi, Larissa and Rituma, Laura and Rocha, Luisa and Romanenko, Mykhailo and Rosa, Rudolf and Rovati, Davide and Roșca, Valentin and Rudina, Olga and Rueter, Jack and Sadde, Shoval and Sagot, Beno{\^{\i}}t and Saleh, Shadi and Salomoni, Alessio and Samard{\v z}i{\'c}, Tanja and Samson, Stephanie and Sanguinetti, Manuela and S{\"a}rg, Dage and Saul{\={\i}}te, Baiba and Sawanakunanon, Yanin and Schneider, Nathan and Schuster, Sebastian and Seddah, Djam{\'e} and Seeker, Wolfgang and Seraji, Mojgan and Shen, Mo and Shimada, Atsuko and Shirasu, Hiroyuki and Shohibussirri, Muh and Sichinava, Dmitry and Silveira, Aline and Silveira, Natalia and Simi, Maria and Simionescu, Radu and Simk{\'o}, Katalin and {\v S}imkov{\'a}, M{\'a}ria and Simov, Kiril and Smith, Aaron and Soares-Bastos, Isabela and Spadine, Carolyn and Stella, Antonio and Straka, Milan and Strnadov{\'a}, Jana and Suhr, Alane and Sulubacak, Umut and Suzuki, Shingo and Sz{\'a}nt{\'o}, Zsolt and Taji, Dima and Takahashi, Yuta and Tamburini, Fabio and Tanaka, Takaaki and Tellier, Isabelle 
and Thomas, Guillaume and Torga, Liisi and Trosterud, Trond and Trukhina, Anna and Tsarfaty, Reut and Tyers, Francis and Uematsu, Sumire and Ure{\v s}ov{\'a}, Zde{\v n}ka and Uria, Larraitz and Uszkoreit, Hans and Utka, Andrius and Vajjala, Sowmya and van Niekerk, Daniel and van Noord, Gertjan and Varga, Viktor and Villemonte de la Clergerie, Eric and Vincze, Veronika and Wallin, Lars and Walsh, Abigail and Wang, Jing Xian and Washington, Jonathan North and Wendt, Maximilan and Williams, Seyi and Wir{\'e}n, Mats and Wittern, Christian and Woldemariam, Tsegay and Wong, Tak-sum and Wr{\'o}blewska, Alina and Yako, Mary and Yamazaki, Naoki and Yan, Chunxiao and Yasuoka, Koichi and Yavrumyan, Marat M. and Yu, Zhuoran and {\v Z}abokrtsk{\'y}, Zden{\v e}k and Zeldes, Amir and Zhang, Manying and Zhu, Hanzhi},
url={http://hdl.handle.net/11234/1-3105},
note={{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright={Licence Universal Dependencies v2.5},
year={2019}
}
@article{Sang2003IntroductionTT,
title={Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition},
author={Erik F. Tjong Kim Sang and Fien De Meulder},
journal={ArXiv},
year={2003},
volume={cs.CL/0306050}
}
@article{Sang2002IntroductionTT,
title={Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition},
author={Erik F. Tjong Kim Sang},
journal={ArXiv},
year={2002},
volume={cs.CL/0209010}
}
@inproceedings{Conneau2018XNLIEC,
title={XNLI: Evaluating Cross-lingual Sentence Representations},
author={Alexis Conneau and Guillaume Lample and Ruty Rinott and Adina Williams and Samuel R. Bowman and Holger Schwenk and Veselin Stoyanov},
booktitle={EMNLP},
year={2018}
}
@article{Lewis2019MLQAEC,
title={MLQA: Evaluating Cross-lingual Extractive Question Answering},
author={Patrick Lewis and Barlas Oguz and Ruty Rinott and Sebastian Riedel and Holger Schwenk},
journal={ArXiv},
year={2019},
volume={abs/1910.07475}
}
@article{Yang2019PAWSXAC,
title={PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification},
author={Yinfei Yang and Yuan Zhang and Chris Tar and Jason Baldridge},
journal={ArXiv},
year={2019},
volume={abs/1908.11828}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
ChristophSchuhmann/MS_COCO_2017_URL_TEXT | 2021-11-27T15:39:29.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | null | 11 | 110 | Entry not found |
rubrix/gutenberg_spacy-ner | 2022-02-24T21:48:13.000Z | [
"region:us"
] | rubrix | null | null | null | 0 | 110 | Entry not found |
bigbio/bionlp_st_2013_cg | 2022-12-22T15:43:57.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The Cancer Genetics (CG) task is an event extraction task and one of the main tasks of the BioNLP Shared Task (ST) 2013.
The CG task is an information extraction task targeting the recognition of events in text,
represented as structured n-ary associations of given physical entities. In addition to
addressing the cancer domain, the CG task is differentiated from previous event extraction
tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple
levels of biological organization, ranging from the molecular through the cellular and organ
levels up to whole organisms. Final test set submissions were accepted from six teams. | @inproceedings{pyysalo-etal-2013-overview,
title = "Overview of the Cancer Genetics ({CG}) task of {B}io{NLP} Shared Task 2013",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Ananiadou, Sophia",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2008",
pages = "58--66",
} | null | 1 | 110 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 CG
homepage: https://github.com/openbiocorpora/bionlp-st-2013-cg
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2013 CG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-cg
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The Cancer Genetics (CG) task is an event extraction task and one of the main tasks of the BioNLP Shared Task (ST) 2013.
The CG task is an information extraction task targeting the recognition of events in text,
represented as structured n-ary associations of given physical entities. In addition to
addressing the cancer domain, the CG task is differentiated from previous event extraction
tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple
levels of biological organization, ranging from the molecular through the cellular and organ
levels up to whole organisms. Final test set submissions were accepted from six teams.
## Citation Information
```
@inproceedings{pyysalo-etal-2013-overview,
title = "Overview of the Cancer Genetics ({CG}) task of {B}io{NLP} Shared Task 2013",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Ananiadou, Sophia",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2008",
pages = "58--66",
}
```
|
roneneldan/TinyStoriesInstruct | 2023-05-18T21:20:35.000Z | [
"region:us"
] | roneneldan | null | null | null | 18 | 110 | Entry not found |
GATE-engine/medical_decathlon | 2023-06-28T00:08:47.000Z | [
"region:us"
] | GATE-engine | null | null | null | 0 | 110 | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence:
sequence: float32
- name: label
sequence:
sequence:
sequence:
sequence: float32
- name: image_meta_dict
struct:
- name: affine
sequence:
sequence: float64
- name: as_closest_canonical
dtype: bool
- name: bitpix
dtype: int16
- name: cal_max
dtype: float32
- name: cal_min
dtype: float32
- name: datatype
dtype: int16
- name: dim
sequence: int16
- name: dim_info
dtype: uint8
- name: extents
dtype: int32
- name: filename_or_obj
dtype: string
- name: glmax
dtype: int32
- name: glmin
dtype: int32
- name: intent_code
dtype: int16
- name: intent_p1
dtype: float32
- name: intent_p2
dtype: float32
- name: intent_p3
dtype: float32
- name: original_affine
sequence:
sequence: float64
- name: original_channel_dim
dtype: int64
- name: pixdim
sequence: float32
- name: qform_code
dtype: int16
- name: qoffset_x
dtype: float32
- name: qoffset_y
dtype: float32
- name: qoffset_z
dtype: float32
- name: quatern_b
dtype: float32
- name: quatern_c
dtype: float32
- name: quatern_d
dtype: float32
- name: scl_inter
dtype: float32
- name: scl_slope
dtype: float32
- name: session_error
dtype: int16
- name: sform_code
dtype: int16
- name: sizeof_hdr
dtype: int32
- name: slice_code
dtype: uint8
- name: slice_duration
dtype: float32
- name: slice_end
dtype: int16
- name: slice_start
dtype: int16
- name: space
dtype: string
- name: spatial_shape
sequence: int16
- name: srow_x
sequence: float32
- name: srow_y
sequence: float32
- name: srow_z
sequence: float32
- name: toffset
dtype: float32
- name: vox_offset
dtype: float32
- name: xyzt_units
dtype: uint8
- name: label_meta_dict
struct:
- name: affine
sequence:
sequence: float64
- name: as_closest_canonical
dtype: bool
- name: bitpix
dtype: int16
- name: cal_max
dtype: float32
- name: cal_min
dtype: float32
- name: datatype
dtype: int16
- name: dim
sequence: int16
- name: dim_info
dtype: uint8
- name: extents
dtype: int32
- name: filename_or_obj
dtype: string
- name: glmax
dtype: int32
- name: glmin
dtype: int32
- name: intent_code
dtype: int16
- name: intent_p1
dtype: float32
- name: intent_p2
dtype: float32
- name: intent_p3
dtype: float32
- name: original_affine
sequence:
sequence: float64
- name: original_channel_dim
dtype: int64
- name: pixdim
sequence: float32
- name: qform_code
dtype: int16
- name: qoffset_x
dtype: float32
- name: qoffset_y
dtype: float32
- name: qoffset_z
dtype: float32
- name: quatern_b
dtype: float32
- name: quatern_c
dtype: float32
- name: quatern_d
dtype: float32
- name: scl_inter
dtype: float32
- name: scl_slope
dtype: float32
- name: session_error
dtype: int16
- name: sform_code
dtype: int16
- name: sizeof_hdr
dtype: int32
- name: slice_code
dtype: uint8
- name: slice_duration
dtype: float32
- name: slice_end
dtype: int16
- name: slice_start
dtype: int16
- name: space
dtype: string
- name: spatial_shape
sequence: int16
- name: srow_x
sequence: float32
- name: srow_y
sequence: float32
- name: srow_z
sequence: float32
- name: toffset
dtype: float32
- name: vox_offset
dtype: float32
- name: xyzt_units
dtype: uint8
- name: task_name
dtype: string
splits:
- name: training.task01braintumour
num_bytes: 86983526676
num_examples: 484
- name: training.task02heart
num_bytes: 1876862286
num_examples: 20
- name: training.task03liver
num_bytes: 123248219480
num_examples: 131
- name: training.task04hippocampus
num_bytes: 134681070
num_examples: 260
- name: training.task05prostate
num_bytes: 761819690
num_examples: 32
- name: training.task06lung
num_bytes: 37161866791
num_examples: 63
- name: training.task07pancreas
num_bytes: 56624596467
num_examples: 281
- name: training.task08hepaticvessel
num_bytes: 44928904186
num_examples: 303
- name: training.task09spleen
num_bytes: 7740805293
num_examples: 41
- name: training.task10colon
num_bytes: 28547100354
num_examples: 126
download_size: 661175837
dataset_size: 388008382293
---
# Dataset Card for "medical_decathlon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
totally-not-an-llm/sharegpt-hyperfiltered-3k | 2023-07-13T02:17:45.000Z | [
"license:apache-2.0",
"region:us"
] | totally-not-an-llm | null | null | null | 6 | 110 | ---
license: apache-2.0
---
# sharegpt-hyperfiltered-3k
90k ShareGPT convos brought down to ~3k (3243) via language filtering, keyword detection, deduping, and regex. The following was done:
- Deduplication on first message from human
- Remove non-English convos
- Remove censorship, refusals, and alignment
- Remove incorrect/low-quality answers
- Remove creative tasks
- ChatGPT's creative outputs are very censored and robotic; I think the base model can do better.
- Remove URLs
- Remove cutoffs
- Remove math/reasoning questions
- It sucks without CoT prompting, so this data should be mixed with better reasoning examples like OpenOrca or Dolphin.
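The dedupe, URL, and language steps above can be sketched in a few lines. This is an illustrative sketch only — the `from`/`value` turn fields and the mostly-ASCII English check are assumptions, not the exact filters used for this dataset:

```python
import re

def filter_convos(convos):
    """Keep one convo per unique first human message; drop convos containing
    URLs or mostly non-ASCII text (a crude non-English proxy)."""
    url_re = re.compile(r"https?://\S+")
    seen_first = set()
    kept = []
    for convo in convos:
        text = " ".join(t["value"] for t in convo)
        first_human = next((t["value"] for t in convo if t["from"] == "human"), None)
        if first_human is None or first_human in seen_first:
            continue  # deduplication on the first message from human
        if url_re.search(text):
            continue  # remove URLs
        if sum(c.isascii() for c in text) / max(len(text), 1) < 0.9:
            continue  # remove non-English convos (rough heuristic)
        seen_first.add(first_human)
        kept.append(convo)
    return kept
```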
|
jxie/higgs-normalized | 2023-09-13T00:55:35.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 110 | ---
dataset_info:
features:
- name: label
dtype: float64
- name: inputs
sequence: float64
splits:
- name: train
num_bytes: 2478000000
num_examples: 10500000
- name: test
num_bytes: 118000000
num_examples: 500000
- name: train_1k
num_bytes: 236000
num_examples: 1000
- name: train_10k
num_bytes: 2360000
num_examples: 10000
- name: train_100k
num_bytes: 23600000
num_examples: 100000
download_size: 2144173073
dataset_size: 2622196000
---
# Dataset Card for "higgs-normalized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Schweinhund/arch_ml_v_1 | 2023-09-29T09:52:11.000Z | [
"region:us"
] | Schweinhund | null | null | null | 0 | 110 | Entry not found |
result-kand2-sdxl-wuerst-karlo/7e27d622 | 2023-10-06T03:10:37.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 110 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 232
num_examples: 10
download_size: 1424
dataset_size: 232
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7e27d622"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lccc | 2022-11-18T22:07:56.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:zh",
"license:mit",
"arxiv:2008.03946",
"region:us"
] | null | LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
This pipeline involves a set of rules and several classifier-based filters.
Noises such as offensive or sensitive words, special symbols, emojis,
grammatically incorrect sentences, and incoherent conversations are filtered. | @inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
} | null | 13 | 109 | ---
annotations_creators:
- other
language_creators:
- other
language:
- zh
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: lccc
pretty_name: 'LCCC: Large-scale Cleaned Chinese Conversation corpus'
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
dataset_info:
- config_name: large
features:
- name: dialog
list: string
splits:
- name: train
num_bytes: 1530827965
num_examples: 12007759
download_size: 607605643
dataset_size: 1530827965
- config_name: base
features:
- name: dialog
list: string
splits:
- name: train
num_bytes: 932634902
num_examples: 6820506
- name: test
num_bytes: 1498216
num_examples: 10000
- name: validation
num_bytes: 2922731
num_examples: 20000
download_size: 371475095
dataset_size: 937055849
---
# Dataset Card for LCCC
## Table of Contents
- [Dataset Card for LCCC](#dataset-card-for-lccc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/thu-coai/CDial-GPT
- **Paper:** https://arxiv.org/abs/2008.03946
### Dataset Summary
LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large Chinese dialogue corpus originating from Chinese social media. A rigorous data cleaning pipeline is designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noises such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations are filtered.
LCCC是一套来自于中文社交媒体的对话数据,我们设计了一套严格的数据过滤流程来确保该数据集中对话数据的质量。 这一数据过滤流程中包括一系列手工规则以及若干基于机器学习算法所构建的分类器。 我们所过滤掉的噪声包括:脏字脏词、特殊字符、颜表情、语法不通的语句、上下文不相关的对话等。
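As a rough illustration of the rule-based part of such a pipeline (the word list, the regex, and the function names below are placeholders, not the actual LCCC rules, and the classifier-based filters are not shown):

```python
import re

BLOCKLIST = {"badword"}  # placeholder for the offensive/sensitive word list
SPECIAL = re.compile(r"[^\w\s,。!?、!?,.]")  # crude special-symbol/emoji detector

def keep_utterance(utt: str) -> bool:
    """Rule-based checks only; the real pipeline adds classifier-based filters."""
    if any(w in utt for w in BLOCKLIST):
        return False  # offensive or sensitive word
    if SPECIAL.search(utt):
        return False  # special symbol or emoji
    return True

def clean_dialog(dialog):
    """Drop the whole dialogue if any utterance fails a check."""
    return dialog if all(keep_utterance(u) for u in dialog) else None
```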
### Supported Tasks and Leaderboards
- dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
- response-retrieval: The dataset can be used to train a reranker model that can be used to implement a retrieval-based dialogue model.
### Languages
LCCC is in Chinese
LCCC中的对话是中文的
## Dataset Structure
### Data Instances
```json
{
"dialog": ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
}
```
### Data Fields
- `dialog` (list of strings): List of utterances consisting of a dialogue.
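Since each example is just a list of utterances, deriving (context, response) pairs for the dialogue-generation task is straightforward. A small illustrative helper (not part of the dataset or its loader):

```python
def to_pairs(dialog):
    """Split a multi-turn dialog into (context, response) training pairs:
    each utterance after the first is a response to all prior turns."""
    return [(dialog[:i], dialog[i]) for i in range(1, len(dialog))]

example = {"dialog": ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅",
                      "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !",
                      "不会 的 就是 好 油腻"]}
pairs = to_pairs(example["dialog"])  # 2 pairs
```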
### Data Splits
We do not provide an official split for LCCC-large,
but we do provide a split for LCCC-base:
|train|valid|test|
|---:|---:|---:|
|6,820,506 | 20,000 | 10,000|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
MIT License
Copyright (c) 2020 lemon234071
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
### Citation Information
```bibtex
@inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
}
```
### Contributions
Thanks to [Yinhe Zheng](https://github.com/silverriver) for adding this dataset. |
grosenthal/latin_english_translation | 2023-07-17T21:59:06.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:la",
"language:en",
"license:mit",
"doi:10.57967/hf/0903",
"region:us"
] | grosenthal | null | null | null | 4 | 109 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: la
dtype: string
- name: en
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 39252644
num_examples: 99343
- name: test
num_bytes: 405056
num_examples: 1014
- name: valid
num_bytes: 392886
num_examples: 1014
download_size: 25567350
dataset_size: 40050586
license: mit
task_categories:
- translation
language:
- la
- en
pretty_name: Latin to English Translation Pairs
size_categories:
- 10K<n<100K
---
# Dataset Card for "latin_english_parallel"
101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation.
For the pairs gathered from the Loeb Classical Library, alignment between source and target sequences was performed manually.
Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them.
 |
manu/wmt-en-fr | 2023-09-19T08:27:24.000Z | [
"region:us"
] | manu | null | null | null | 0 | 109 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14956705827
num_examples: 40836715
- name: validation
num_bytes: 759439
num_examples: 3000
- name: test
num_bytes: 853864
num_examples: 3003
download_size: 3671540079
dataset_size: 14958319130
---
# Dataset Card for "wmt-en-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distil-whisper/earnings21 | 2023-09-19T11:49:58.000Z | [
"region:us"
] | distil-whisper | null | null | null | 0 | 109 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: file_id
dtype: string
- name: audio_length
dtype: string
- name: sample_rate
dtype: string
- name: company_name
dtype: string
- name: financial_quarter
dtype: string
- name: sector
dtype: string
- name: speaker_switches
dtype: string
- name: unique_speakers
dtype: string
- name: curator_id
dtype: string
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 772215561.0
num_examples: 44
download_size: 766546768
dataset_size: 772215561.0
---
# Dataset Card for "earnings21-long-form"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gsarti/clean_mc4_it | 2022-10-23T09:01:21.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:it",
"license:odc-by",
"arxiv:1910.10683",
"arxiv:2203.03759",
"region:us"
] | gsarti | A thoroughly cleaned version of the Italian portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file. | @article{JMLR:v21:20-074,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
} | null | 6 | 108 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- it
license:
- odc-by
multilinguality:
- monolingual
size_categories:
tiny:
- 1M<n<10M
small:
- 10M<n<100M
medium:
- 10M<n<100M
large:
- 10M<n<100M
full:
- 100M<n<1B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4_it
---
# Dataset Card for Clean Italian mC4 🇮🇹
## Table of Contents
- [Dataset Card for Clean Italian mC4](#dataset-card-for-clean-italian-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Preprocessing](#preprocessing)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Italian split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4). Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4), with subsequent preprocessing performed by [Gabriele Sarti](https://gsarti.com) following a standard procedure for all dataset shards.
### Preprocessing
The preprocessing of the dataset follows the procedure used by Yeb Havinga for training the model [`t5-base-dutch`](https://huggingface.co/flax-community/t5-base-dutch) on a portion of the cleaned Dutch split of mC4. The original code, that was adapted for Italian in this case, is available on [GitLab](https://gitlab.com/yhavinga/c4nlpreproc). In summary, the preprocessing procedure includes:
- Removing documents containing words from a selection of the [Italian and English List of Dirty, Naughty, Obscene, and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).
- Removing sentences containing:
- Less than 3 words.
- A word longer than 1000 characters.
- An end symbol not matching end-of-sentence punctuation.
  - Strings associated with JavaScript code (e.g. `{`), lorem ipsum text, or policy information in Italian or English.
- Removing documents (after sentence filtering):
- Containing less than 5 sentences.
- Containing less than 500 or more than 50'000 characters.
- Not identified as prevalently Italian by the `LangDetect` package.
Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Italian shards of mC4 (1024 of ~220Mb train, 8 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure.
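As a rough illustration, the sentence-level rules above can be sketched as a small filter function (a simplified sketch, not the actual preprocessing code; thresholds follow the list above):

```python
END_PUNCT = (".", "!", "?", "…")

def keep_sentence(sentence: str) -> bool:
    """Simplified version of the sentence-level cleaning rules."""
    words = sentence.split()
    if len(words) < 3:                        # fewer than 3 words
        return False
    if any(len(w) > 1000 for w in words):     # a word longer than 1000 characters
        return False
    if not sentence.rstrip().endswith(END_PUNCT):  # must end with sentence punctuation
        return False
    if "{" in sentence or "lorem ipsum" in sentence.lower():  # javascript / lorem ipsum
        return False
    return True

print(keep_sentence("Questa è una frase italiana valida."))  # True
print(keep_sentence("var x = {"))                            # False
```

The real pipeline additionally runs sentence tokenization and language detection, which dominate the processing time mentioned above.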
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'timestamp': '2020-02-22T22:24:31Z',
'url': 'https://altreconomia.it/una-rotonda-sul-pane/',
'text': 'Per raggiungere il campo attraversiamo la striscia d’asfalto che porta verso la provinciale numero 13. Mettiamo a rischio la nostra incolumità in un territorio di auto e camion. Sullo sfondo, i profili della Grigna e del Resegone. Più vicini, quelli del solito ipermercato di provincia, e delle villette a schiera che avanzano tra le coltivazioni. È lo sprawling, l’avanzata del cemento.\\nDa questo lato dalla strada, invece, è ancora regno contadino. Almeno per ora. Torniamo a Caponago (Mb), Brianza pura, dove ha avuto i natali il progetto “Spiga e madia”. Ne parlammo su Ae nel gennaio 2009: in un territorio “spaesato”, il Comitato “verso il Distretto di economia solidale della Brianza” (Desbri) e la “Retina” dei gruppi di acquisto locali danno vita a un progetto di produzione di frumento, molitura, panificazione e distribuzione in un raggio di 20 chilometri. Si comincia da zero, nel 2007, senza alcun di finanziamento, quando una famiglia del [...]. Il giochino vale almeno 3 miliardi di euro all’anno. La misura, introdotta in via straordinaria con la finanziaria 2005, è stata prorogata anche con l’ultimo decreto “milleproroghe”.'
}
```
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Splits
To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. For Italian, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following the naming style `c4-it.tfrecord-0XXXX-of-01024.json.gz` and 8 for validation following the naming style `c4-it-validation.tfrecord-0000X-of-00008.json.gz`. The full set of preprocessed files takes roughly 215GB of disk space to download with Git LFS.
For ease of use under different storage capacities, the following incremental splits are available (sizes are estimates). **Important**: the sizes in GB represent the estimated disk space for the compressed download and for the preprocessed files, respectively:
|split |train size (docs, words, download + preproc disk space)|validation size|
|:-----|------------------------------------------------------:|--------------:|
|tiny | 10M docs, 4B words (9 GB + 27 GB) | 12k docs |
|small | 20M docs, 8B words (18 GB + 54 GB) | 24k docs |
|medium| 50M docs, 20B words (47 GB + 135 GB) | 48k docs |
|large | 75M docs, 30B words (71 GB + 203 GB) | 72k docs |
|full | 103M docs, 41B words (109 GB + 279 GB) | 96k docs |
You can load any subset like this:
```python
from datasets import load_dataset
mc4_it_tiny = load_dataset("gsarti/clean_mc4_it", "tiny")
```
Since splits are quite large, you may want to traverse them using the streaming mode available starting from 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_it_full_stream = load_dataset("gsarti/clean_mc4_it", "full", split='train', streaming=True)
print(next(iter(mc4_it_full_stream))) # Prints the example presented above
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Social Impact of Dataset
With more than 200GB of cleaned Italian text and more than 41B estimated words, this is by far the largest available corpus for the Italian language. The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 69GB in size for its deduplicated variant. Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language. This can in turn have important repercussions for the development of commercial language technology applications for the Italian language.
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.
## Additional Information
### Dataset Curators
Authors at AllenAI are the original curators for the `mc4` corpus. For inquiries or requests regarding the Italian cleaned portion contained in this repository, please contact me at [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com)
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
If you use this dataset in your work, please cite us and the original mC4 authors as:
```
@article{sarti-nissim-2022-it5,
title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
@inproceedings{xue-etal-2021-mt5,
title = "m{T}5: A Massively Multilingual Pre-trained Text-to-Text Transformer",
author = "Xue, Linting and
Constant, Noah and
Roberts, Adam and
Kale, Mihir and
Al-Rfou, Rami and
Siddhant, Aditya and
Barua, Aditya and
Raffel, Colin",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.41",
doi = "10.18653/v1/2021.naacl-main.41",
pages = "483--498",
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
suolyer/pile_wikipedia | 2023-03-27T03:58:20.000Z | [
"license:apache-2.0",
"region:us"
] | suolyer | null | null | null | 0 | 108 | ---
license: apache-2.0
---
|
danielpark/MQuAD-v1 | 2023-04-07T12:21:48.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"language:ko",
"license:apache-2.0",
"biology",
"region:us"
] | danielpark | null | null | null | 2 | 108 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
- ko
tags:
- biology
pretty_name: Medical domain QA dataset for training a medical chatbot.
---
# MQuAD
The Medical Question Answering dataset (MQuAD) has been refined from the datasets listed below. You can download it through the Hugging Face Hub using the `datasets` library as follows.
## Quick Guide
```python
from datasets import load_dataset
dataset = load_dataset("danielpark/MQuAD-v1")
```
Medical Q/A datasets gathered from the following websites.
- eHealth Forum
- iCliniq
- Question Doctors
- WebMD
Data was gathered on the 5th of May 2017.
MQuAD provides embedded question and answer arrays in string format, so it is recommended to convert the string-formatted arrays into float format as follows. Storing them as strings was a measure applied to save the resources and time used for embedding.
```python
from datasets import load_dataset
from utilfunction import col_convert
import pandas as pd
qa = load_dataset("danielpark/MQuAD-v1", "csv")
df_qa = pd.DataFrame(qa['train'])
df_qa = col_convert(df_qa, ['Q_FFNN_embeds', 'A_FFNN_embeds'])
```
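If the `utilfunction.col_convert` helper is not available, an equivalent conversion can be sketched with the standard library (`col_convert` here is a hypothetical re-implementation; it assumes the embeddings are stored as Python-style list literals):

```python
import ast

def col_convert(df, columns):
    """Parse string-encoded embedding arrays into lists of floats (hypothetical sketch)."""
    for col in columns:
        df[col] = df[col].apply(lambda s: [float(x) for x in ast.literal_eval(s)])
    return df

# The core parsing step, shown on a single string-formatted array:
embedded = "[0.12, -0.5, 3.0]"
print([float(x) for x in ast.literal_eval(embedded)])  # [0.12, -0.5, 3.0]
```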
|
ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered | 2023-04-28T07:36:17.000Z | [
"region:us"
] | ehartford | null | null | null | 88 | 108 | This dataset is the WizardLM dataset victor123/evol_instruct_70k, removing instances of blatant alignment.
54974 instructions remain.
inspired by https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py
---
license: apache-2.0
language:
- en
pretty_name: wizardlm-unfiltered
--- |
MMInstruction/M3IT-80 | 2023-06-20T12:43:25.000Z | [
"task_categories:image-to-text",
"task_categories:image-classification",
"size_categories:0.5M<n<1M",
"license:other",
"region:us"
] | MMInstruction | Multi-modal Bi-lingual Instruction Dataset | null | null | 1 | 108 | ---
license: other
task_categories:
- image-to-text
- image-classification
size_categories:
- 0.5M<n<1M
---
# Dataset Card for M3IT-80
Project Page: [https://m3-it.github.io/](https://m3-it.github.io/)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/MMInstruction/M3IT-80**
- **Repository: https://huggingface.co/datasets/MMInstruction/M3IT-80**
- **Paper: https://huggingface.co/papers/2306.04387**
- **Leaderboard:**
- **Point of Contact:**
### Languages
80 languages translated from English.
## Dataset Metainfo
[M3IT](https://huggingface.co/datasets/MMInstruction/M3IT) dataset
compiles diverse classical vision-language tasks, including captioning,
visual question answering (VQA), visual conditioned generation, reasoning and classification.
**M3IT-80** is the 80-language translated version of M3IT.
### Languages
```python
_LAN_CODES = [
"af", "am", "ar", "as", "ast", "be", "bg", "bn", "bs", "ca",
"ceb", "cs", "cy", "da", "de", "el", "es", "et", "fi", "fr",
"fuv", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id",
"ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko",
"ky", "lb", "lg", "lij", "li", "ln", "lo", "lt", "lv", "mi",
"mk", "ml", "mr", "mt", "my", "nl", "ny", "oc", "pa", "pl",
"pt", "ro", "ru", "sd", "sk", "sn", "so", "sr", "sv", "ta",
"te", "tg", "th", "tl", "tr", "uk", "ur", "vi", "wo", "zh",
]
```
### Dataset Statistics
We report the number of train/validation/test examples for each dataset per language.
| Task | Dataset | #Train | #Val | #Test |
|---------------------------|--------------|--------|------|-------|
| Classification | `imagenet` | 500 | 500 | 0 |
| Visual Question Answering | `vqa-v2` | 500 | 500 | 0 |
| Knowledgeable Visual QA | `okvqa` | 500 | 500 | 0 |
| Reasoning | `winoground` | 0 | 0 | 800 |
| Generation | `vist` | 500 | 500 | 500 |
| Video | `msrvtt` | 500 | 500 | 0 |
| | `msrvtt-qa` | 500 | 500 | 0 |
### Source Data
Source language: English
| Task | Dataset [Citation] | Source |
|---------------------------|--------------------|------------------------------------------------------------------------------------|
| Classification | `imagenet` [1] | [Source](https://www.image-net.org/) |
| Visual Question Answering | `vqa-v2` [2] | [Source](https://visualqa.org/) |
| Knowledgeable Visual QA | `okvqa` [3] | [Source](https://okvqa.allenai.org/) |
| Reasoning | `winoground` [4] | [Source](https://huggingface.co/datasets/facebook/winoground) |
| Generation | `vist` [5] | [Source](https://visionandlanguage.net/VIST/) |
| Video | `msrvtt` [6] | [Source](https://paperswithcode.com/dataset/msr-vtt) |
| | `msrvtt-qa` [7] | [Source](https://paperswithcode.com/sota/visual-question-answering-on-msrvtt-qa-1) |
### Translation
We use the free [Alibaba Translate](https://www.alibabacloud.com/product/machine-translation) service,
a neural machine translation (NMT) system, to perform the translation task.
## Dataset Structure
### HuggingFace Login (Optional)
```python
# OR run huggingface-cli login
from huggingface_hub import login
hf_token = "hf_xxx" # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```
### Data Loading
```python
from datasets import load_dataset
ds_name = "okvqa-zh" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT-80", ds_name)
```
### Data Splits
```python
from datasets import load_dataset
ds_name = "okvqa-zh" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT-80", ds_name)
train_set = dataset["train"]
validation_set = dataset["validation"]
test_set = dataset["test"]
```
### Data Instances
```python
from datasets import load_dataset
from io import BytesIO
from base64 import b64decode
from PIL import Image
ds_name = "okvqa-zh" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT-80", ds_name)
train_set = dataset["train"]
for train_instance in train_set:
instruction = train_instance["instruction"] # str
inputs = train_instance["inputs"] # str
outputs = train_instance["outputs"] # str
image_base64_str_list = train_instance["image_base64_str"] # str (base64)
image_0 = Image.open(BytesIO(b64decode(image_base64_str_list[0])))
```
### Data Fields
```python
import datasets
features = datasets.Features(
{
"instruction": datasets.Value("string"),
"inputs": datasets.Value("string"),
"image_base64_str": [datasets.Value("string")],
"outputs": datasets.Value("string"),
}
)
```
### Licensing Information
The content of each original dataset follows its original license.
For tasks with an Unknown/Custom license, we suggest checking the original
project or contacting the dataset owner for detailed license information.
Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bibtex
@article{li2023m3it,
title={M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning},
author={Lei Li and Yuwei Yin and Shicheng Li and Liang Chen and Peiyi Wang and Shuhuai Ren and Mukai Li and Yazheng Yang and Jingjing Xu and Xu Sun and Lingpeng Kong and Qi Liu},
journal={arXiv preprint arXiv:2306.04387},
year={2023}
}
```
### Contributions
M3IT-80 is the translated version of M3IT,
an open-source, large-scale Multi-modal, Multilingual Instruction Tuning dataset,
designed to enable the development of general-purpose multi-modal agents.
## References
- [1] Imagenet large scale visual recognition challenge
- [2] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
- [3] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge
- [4] WinoGround: Probing vision and language models for visio-linguistic compositionality
- [5] Visual Storytelling
- [6] Video Question Answering via Gradually Refined Attention over Appearance and Motion
- [7] MSR-VTT: A large video description dataset for bridging video and language
|
AILab-CVC/SEED-Bench | 2023-08-02T03:02:59.000Z | [
"task_categories:visual-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | AILab-CVC | null | null | null | 10 | 108 | ---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: SEED-Bench
size_categories:
- 10K<n<100K
---
# SEED-Bench Card
## Benchmark details
**Benchmark type:**
SEED-Bench is a large-scale benchmark to evaluate Multimodal Large Language Models (MLLMs).
It consists of 19K multiple-choice questions with accurate human annotations, which
cover 12 evaluation dimensions including the comprehension of both the image and video modalities.
**Benchmark date:**
SEED-Bench was collected in July 2023.
**Paper or resources for more information:**
https://github.com/AILab-CVC/SEED-Bench
**License:**
Attribution-NonCommercial 4.0 International. It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use.
For the images of SEED-Bench, we use the data from Conceptual Captions Dataset (https://ai.google.com/research/ConceptualCaptions/)
following its license (https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE).
Tencent does not hold the copyright for these images and the copyright belongs to the original owner of Conceptual Captions Dataset.
For the videos of SEED-Bench, we use the data from Something-Something v2 (https://developer.qualcomm.com/software/ai-datasets/something-something),
Epic-kitchens 100 (https://epic-kitchens.github.io/2023) and
Breakfast (https://serre-lab.clps.brown.edu/resource/breakfast-actions-dataset/). We only provide the video names. Please download the videos from their official websites.
**Where to send questions or comments about the benchmark:**
https://github.com/AILab-CVC/SEED-Bench/issues
## Intended use
**Primary intended uses:**
The primary use of SEED-Bench is to evaluate Multimodal Large Language Models on spatial and temporal understanding.
**Primary intended users:**
The primary intended users of the Benchmark are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
stockmark/ner-wikipedia-dataset | 2023-09-02T14:42:18.000Z | [
"task_categories:token-classification",
"language:ja",
"license:cc-by-sa-3.0",
"Named Entity Recognition",
"NER",
"region:us"
] | stockmark | null | null | null | 1 | 108 | ---
license: cc-by-sa-3.0
language:
- ja
tags:
- Named Entity Recognition
- NER
task_categories:
- token-classification
---
# Japanese Named Entity Recognition Dataset Using Wikipedia
- GitHub: https://github.com/stockmarkteam/ner-wikipedia-dataset/
- LICENSE: CC-BY-SA 3.0
Developed by Stockmark Inc. |
TearGosling/limarp_standardized | 2023-09-05T01:01:28.000Z | [
"region:us"
] | TearGosling | null | null | null | 0 | 108 | Entry not found |
MU-NLPC/Calc-svamp | 2023-10-07T17:27:37.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"region:us"
] | MU-NLPC | null | null | null | 0 | 108 | ---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- math world problems
- math
- arithmetics
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_examples: 1000
---
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer of the mathematical problem (a number)
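For illustration, the three tag types can be extracted with a simple regular expression instead of BeautifulSoup (the chain string below is a hypothetical example, not taken from the dataset):

```python
import re

chain = '<gadget id="calculator">12 * 4</gadget> <output>48</output> <result>48</result>'

def extract_tags(chain_str):
    """Map each tag name to the list of its contents."""
    return {
        name: re.findall(rf"<{name}[^>]*>(.*?)</{name}>", chain_str, flags=re.S)
        for name in ("gadget", "output", "result")
    }

print(extract_tags(chain))
# {'gadget': ['12 * 4'], 'output': ['48'], 'result': ['48']}
```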
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Attributes:
- `id`: problem id from the original dataset
- `question`: the question intended to answer
- `chain`: series of simple operations (derived from `equation`) that lead to the solution
- `result`: the result (number) as a string
- `result_float`: result converted to a floating point
- `equation`: a nested expression that evaluates to the correct result
- `problem_type`: a category of the problem
## Content and data splits
The dataset contains the same data instances as the original dataset, except for a correction of an inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split, and we do not create one. However, the original authors used cross-validation in the official repository - for more info, see <https://github.com/arkilpatel/SVAMP/>.
## Licence
MIT, consistent with the original source dataset linked above.
## Related work
If you are interested in related datasets (or models), check out the MU-NLPC organization here on HuggingFace. We have released a few other datasets in a compatible format, and several models that use an external calculator during inference.
## Cite
If you use this version of the dataset in research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35).
hyperdemocracy/us-congress-bills | 2023-09-11T01:40:51.000Z | [
"license:mit",
"region:us"
] | hyperdemocracy | null | null | null | 0 | 108 | ---
license: mit
---
|
jphme/wikitext_de_document_level_v01 | 2023-09-21T13:08:52.000Z | [
"region:us"
] | jphme | null | null | null | 0 | 108 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1860002
num_examples: 200
download_size: 1138143
dataset_size: 1860002
---
# Dataset Card for "wikitext_de_document_level_v01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anamhira/foundation_action | 2023-10-10T20:28:54.000Z | [
"region:us"
] | anamhira | null | null | null | 0 | 108 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 663896
num_examples: 289
- name: valid
num_bytes: 8842
num_examples: 3
download_size: 134650
dataset_size: 672738
---
# Dataset Card for "foundation_action"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DykeF/NCTCRCHE100K | 2023-10-04T19:37:15.000Z | [
"license:cc-by-4.0",
"region:us"
] | DykeF | This is a set of 100,000 non-overlapping image patches from hematoxylin & eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue.
All images are 224x224 pixels (px) at 0.5 microns per pixel (MPP). All images are color-normalized using Macenko's method (http://ieeexplore.ieee.org/abstract/document/5193250/, DOI 10.1109/ISBI.2009.5193250).
Tissue classes are: Adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR), colorectal adenocarcinoma epithelium (TUM).
These images were manually extracted from N=86 H&E stained human cancer tissue slides from formalin-fixed paraffin-embedded (FFPE) samples from the NCT Biobank (National Center for Tumor Diseases, Heidelberg, Germany) and the UMM pathology archive (University Medical Center Mannheim, Mannheim, Germany). Tissue samples contained CRC primary tumor slides and tumor tissue from CRC liver metastases; normal tissue classes were augmented with non-tumorous regions from gastrectomy specimen to increase variability. | Kather, Jakob Nikolas, Halama, Niels, & Marx, Alexander. (2018). 100,000 histological images of human colorectal cancer and healthy tissue (v0.1) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.1214456 | null | 0 | 108 | ---
license: cc-by-4.0
---
# NCTCRCHE100K Dataset Card
# Citation
```bash
Kather, Jakob Nikolas, Halama, Niels, & Marx, Alexander. (2018). 100,000 histological images of human colorectal cancer and healthy tissue (v0.1) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.1214456
```
# Description
This is a set of 100,000 non-overlapping image patches from hematoxylin & eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue.
All images are 224x224 pixels (px) at 0.5 microns per pixel (MPP). All images are color-normalized using Macenko's method (http://ieeexplore.ieee.org/abstract/document/5193250/, DOI 10.1109/ISBI.2009.5193250).
Tissue classes are: Adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR), colorectal adenocarcinoma epithelium (TUM).
These images were manually extracted from N=86 H&E stained human cancer tissue slides from formalin-fixed paraffin-embedded (FFPE) samples from the NCT Biobank (National Center for Tumor Diseases, Heidelberg, Germany) and the UMM pathology archive (University Medical Center Mannheim, Mannheim, Germany). Tissue samples contained CRC primary tumor slides and tumor tissue from CRC liver metastases; normal tissue classes were augmented with non-tumorous regions from gastrectomy specimen to increase variability.
### Data Structure
The dataset is structured into training splits (100,000 "train" and 100,000 "train_nonorm" samples) as well as a validation split of 7180 samples.
## Setup Instructions
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor

def transform(data):
    data["image"] = [ToTensor()(img) for img in data["image"]]  # convert PIL images to torch.Tensor
    return data

ds_train = load_dataset("DykeF/NCTCRCHE100K", split="train")  # or train_nonorm or validation
ds_train.set_transform(transform)
loader = DataLoader(ds_train, batch_size=32)  # batches of image tensors
```
|
hybrid_qa | 2023-03-28T12:23:49.000Z | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"multihop-tabular-text-qa",
"arxiv:1909.05358",
"region:us"
] | null | Existing question answering datasets focus on dealing with homogeneous information, based either only on text or KB/Table information alone. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information alone might lead to severe coverage problems. To fill in the gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked with the entities in the table. The questions are designed to aggregate both tabular information and text information, i.e., lack of either form would render the question unanswerable. | @article{chen2020hybridqa,
title={HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data},
author={Chen, Wenhu and Zha, Hanwen and Chen, Zhiyu and Xiong, Wenhan and Wang, Hong and Wang, William},
journal={Findings of EMNLP 2020},
year={2020}
} | null | 1 | 107 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: hybridqa
pretty_name: HybridQA
tags:
- multihop-tabular-text-qa
dataset_info:
features:
- name: question_id
dtype: string
- name: question
dtype: string
- name: table_id
dtype: string
- name: answer_text
dtype: string
- name: question_postag
dtype: string
- name: table
struct:
- name: url
dtype: string
- name: title
dtype: string
- name: header
sequence: string
- name: data
list:
- name: value
dtype: string
- name: urls
list:
- name: url
dtype: string
- name: summary
dtype: string
- name: section_title
dtype: string
- name: section_text
dtype: string
- name: uid
dtype: string
- name: intro
dtype: string
config_name: hybrid_qa
splits:
- name: train
num_bytes: 2745712769
num_examples: 62682
- name: validation
num_bytes: 153512016
num_examples: 3466
- name: test
num_bytes: 148795919
num_examples: 3463
download_size: 217436855
dataset_size: 3048020704
---
# Dataset Card for HybridQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://hybridqa.github.io/index.html
- **Repository:** [GitHub](https://github.com/wenhuchen/HybridQA)
- **Paper:** [HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data](https://arxiv.org/abs/1909.05358)
- **Leaderboard:** [HybridQA Competition](https://competitions.codalab.org/competitions/24420)
- **Point of Contact:** [Wenhu Chen](wenhuchen@cs.ucsb.edu)
### Dataset Summary
Existing question answering datasets focus on dealing with homogeneous information, based either only on text or
KB/Table information alone. However, as human knowledge is distributed over heterogeneous forms,
using homogeneous information alone might lead to severe coverage problems.
To fill in the gap, we present HybridQA, a new large-scale question-answering dataset that
requires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table
and multiple free-form corpora linked with the entities in the table. The questions are designed
to aggregate both tabular information and text information, i.e.,
lack of either form would render the question unanswerable.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
A typical example looks like this:
```
{
"question_id": "00009b9649d0dd0a",
"question": "Who were the builders of the mosque in Herat with fire temples ?",
"table_id": "List_of_mosques_in_Afghanistan_0",
"answer_text": "Ghurids",
"question_postag": "WP VBD DT NNS IN DT NN IN NNP IN NN NNS .",
"table": {
"url": "https://en.wikipedia.org/wiki/List_of_mosques_in_Afghanistan",
"title": "List of mosques in Afghanistan",
"header": [
"Name",
"Province",
"City",
"Year",
"Remarks"
],
"data": [
{
"value": "Kabul",
"urls": [
{
"summary": "Kabul ( Persian : کابل , romanized : Kābol , Pashto : کابل , romanized : Kābəl ) is the capital and largest city of Afghanistan...",
"url": "/wiki/Kabul"
}
]
}
]
},
"section_title": "",
"section_text": "",
"uid": "List_of_mosques_in_Afghanistan_0",
"intro": "The following is an incomplete list of large mosques in Afghanistan:"
}
```
### Data Fields
- `question_id` (str)
- `question` (str)
- `table_id` (str)
- `answer_text` (str)
- `question_postag` (str)
- `table` (dict):
- `url` (str)
- `title` (str)
- `header` (list of str)
- `data` (list of dict):
- `value` (str)
- `urls` (list of dict):
- `url` (str)
- `summary` (str)
- `section_title` (str)
- `section_text` (str)
- `uid` (str)
- `intro` (str)
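The nested fields above can be traversed directly once a record is loaded. As an illustrative sketch (the toy record below is abridged from the example instance above, not fetched from the dataset), the passages linked from table cells — the textual half of the hybrid table + text input — can be collected like this:

```python
# A single HybridQA record, abridged from the example instance above
example = {
    "question_id": "00009b9649d0dd0a",
    "question": "Who were the builders of the mosque in Herat with fire temples ?",
    "answer_text": "Ghurids",
    "table": {
        "header": ["Name", "Province", "City", "Year", "Remarks"],
        "data": [
            {
                "value": "Kabul",
                "urls": [
                    {"url": "/wiki/Kabul",
                     "summary": "Kabul ... is the capital and largest city of Afghanistan ..."},
                ],
            },
        ],
    },
}

# Gather every passage linked from the table cells; questions are designed so
# that both the table and these passages are needed to find the answer
linked_passages = [
    link["summary"]
    for cell in example["table"]["data"]
    for link in cell["urls"]
]
print(len(example["table"]["header"]), len(linked_passages))  # 5 1
```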
### Data Splits
The dataset is split into `train`, `validation` and `test` splits.
| | train | validation | test |
| --------------- |------:|-----------:|-----:|
| N. Instances | 62682 | 3466 | 3463 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
[More Information Needed]
```
@article{chen2020hybridqa,
title={HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data},
author={Chen, Wenhu and Zha, Hanwen and Chen, Zhiyu and Xiong, Wenhan and Wang, Hong and Wang, William},
journal={Findings of EMNLP 2020},
year={2020}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
|
BAJIRAO/imdb_sentiment_3000 | 2022-09-13T12:14:34.000Z | [
"region:us"
] | BAJIRAO | null | null | null | 0 | 107 | Entry not found |
jxm/the_office_lines | 2023-03-07T18:30:51.000Z | [
"region:us"
] | jxm | null | null | null | 18 | 107 | ## the_office_lines
<img src="https://a.pinatafarm.com/1351x1232/c8fa71efd1/the-office-handshake.jpg" width="256">
A dataset of lines from the U.S. version of the tv show "The Office". Lines were originally scraped from the website [officequotes.net](https://www.officequotes.net/); they are fan-transcribed and may be of dubious quality.
Contains a train split (47,927 lines), test split (5,991 lines) and validation split (5,991 lines). Contains lines from all 9 seasons and every episode, but may not be complete.
Lines are annotated with an ID number, season number, episode number, scene number (within the episode), speaker name, and whether or not the text came from a deleted scene. Here is an example:
```
> dataset["val"][0]
{'id': 3735,
'season': 2,
'episode': 5,
'scene': 32,
'line_text': 'No, you have the power to undo it.',
'speaker': 'Creed',
'deleted': False}
```
|
PNLPhub/DigiMag | 2023-06-20T09:39:05.000Z | [
"license:apache-2.0",
"region:us"
] | PNLPhub | A total of 8,515 articles scraped from Digikala Online Magazine. This dataset includes seven different classes. | @article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
} | null | 0 | 107 | ---
license: apache-2.0
---
|
PNLPhub/parsinlu-multiple-choice | 2023-09-03T13:45:10.000Z | [
"license:apache-2.0",
"region:us"
] | PNLPhub | A Persian multiple choice task. | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
} | null | 0 | 107 | ---
license: apache-2.0
dataset_info:
features:
- name: answer
dtype: string
- name: candidates
sequence: string
- name: category
dtype: string
- name: question
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 580795
num_examples: 1271
- name: test
num_bytes: 469886
num_examples: 1050
- name: validation
num_bytes: 64356
num_examples: 139
download_size: 946441
dataset_size: 1115037
---
|
devxpy/therapychat2 | 2023-09-10T08:27:48.000Z | [
"region:us"
] | devxpy | null | null | null | 0 | 107 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 8703844
num_examples: 1573
download_size: 0
dataset_size: 8703844
---
# Dataset Card for "therapychat2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JvManger/pharmacy-llama-2-indic2 | 2023-09-26T08:23:28.000Z | [
"region:us"
] | JvManger | null | null | null | 0 | 107 | Entry not found |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.